{ "paper_id": "I08-1035", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:40:52.969685Z" }, "title": "Automatic Extraction of Briefing Templates", "authors": [ { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University", "location": { "addrLine": "5000 Forbes Avenue Pittsburgh", "postCode": "15213", "region": "PA", "country": "USA" } }, "email": "dipanjan@cs.cmu.edu" }, { "first": "Mohit", "middle": [], "last": "Kumar", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University", "location": { "addrLine": "5000 Forbes Avenue Pittsburgh", "postCode": "15213", "region": "PA", "country": "USA" } }, "email": "mohitkum@cs.cmu.edu" }, { "first": "Alexander", "middle": [ "I" ], "last": "Rudnicky", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University", "location": { "addrLine": "5000 Forbes Avenue Pittsburgh", "postCode": "15213", "region": "PA", "country": "USA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "An approach to solving the problem of automatic briefing generation from non-textual events can be segmenting the task into two major steps, namely, extraction of briefing templates and learning aggregators that collate information from events and automatically fill up the templates. In this paper, we describe two novel unsupervised approaches for extracting briefing templates from human written reports. Since the problem is non-standard, we define our own criteria for evaluating the approaches and demonstrate that both approaches are effective in extracting domain relevant templates with promising accuracies.", "pdf_parse": { "paper_id": "I08-1035", "_pdf_hash": "", "abstract": [ { "text": "An approach to solving the problem of automatic briefing generation from non-textual events can be segmenting the task into two major steps, namely, extraction of briefing templates and learning aggregators that collate information from events and automatically fill up the templates. In this paper, we describe two novel unsupervised approaches for extracting briefing templates from human written reports. Since the problem is non-standard, we define our own criteria for evaluating the approaches and demonstrate that both approaches are effective in extracting domain relevant templates with promising accuracies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Automated briefing generation from non-textual events is an unsolved problem that currently lacks a standard approach in the NLP community. Broadly, it intersects the problem of language generation from structured data and summarization. The problem is relevant in several domains where the user has to repeatedly write reports based on events in the domain, for example, weather reports (Reiter et al., 2005) , medical reports (Elhadad et al., 2005) , weekly class project reports (Kumar et al., 2007) and so forth. On observing the data from these domains, we notice a templatized nature of report items. Examples (1)-(3) demonstrate equivalents in a particular domain (Reiter et al., 2005 In each sentence, the phrases in square brackets at the same relative positions form the slots that take up different values at different occasions. 
The corresponding template is shown in (4), with slots containing their respective domain entity types. Instantiations of (4) may produce (1)-(3) and similar sentences. This kind of sentence structure motivates segmenting the problem of closed-domain summarization into two major steps: automatic template extraction, and the learning of aggregators (pattern detectors that assimilate information from the events) to populate these templates. In the current work we address the first problem, automatically extracting domain templates from human-written reports. We take a two-step approach: first, we cluster report sentences based on similarity; second, we extract the template(s) corresponding to each cluster by aligning the instances in the cluster. We experimented with two independent, arguably complementary techniques for clustering and alignment: a predicate-argument based approach that extracts more general templates containing one predicate, and a ROUGE (Lin, 2004) based approach that can extract templates containing multiple verbs. As we will see below, both approaches show promise.", "cite_spans": [ { "start": 388, "end": 409, "text": "(Reiter et al., 2005)", "ref_id": "BIBREF16" }, { "start": 428, "end": 450, "text": "(Elhadad et al., 2005)", "ref_id": "BIBREF4" }, { "start": 482, "end": 502, "text": "(Kumar et al., 2007)", "ref_id": "BIBREF9" }, { "start": 671, "end": 691, "text": "(Reiter et al., 2005)", "ref_id": "BIBREF16" }, { "start": 1841, "end": 1852, "text": "(Lin, 2004)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There have been instances of template-based summarization in popular Information Extraction (IE) evaluations like MUC (Marsh & Perzanowski, 1998; Onyshkevych, 1994) and ACE (ACE, 2007), where hand-engineered slots were filled for events described in text; the focus, however, lay on template filling rather than template creation. (Riloff, 1996) describes interesting work on generating extraction patterns from untagged text, but the analysis there is syntactic and the patterns do not resemble the templates that we aim to extract. (Yangarber et al., 2000) describe ExDisco, a system that extracts event patterns from un-annotated text starting from seed patterns. Once again, the text analysis is not deep and the extracted patterns are not sentence surface forms. (Collier, 1998) proposed automatic domain template extraction for IE purposes, constructing MUC-style templates for particular types of events. The method relies on the idea from (Luhn, 1958) of extracting the statistically significant words of a corpus; sentences containing these words were then chosen and aligned using subject-object-verb patterns. However, this method did not consider arbitrary syntactic patterns. (Filatova et al., 2006) improved the paradigm by looking at the most frequent verbs occurring in a corpus and aligning subtrees containing each verb, using the syntactic parses as a similarity metric. However, long-distance dependencies between verbs and their constituents were not considered, and no deep semantic analysis was performed to identify similar verb subcategorization frames. In contrast, our predicate-argument based approach looks into deeper semantic structures and aligns sentences not only on similar syntactic parses but also on the constituents' roles with respect to the main predicate. 
Also, they relied on typical Named Entities (NEs) like location, organization and person, and included another entity that they termed NUMBER. However, for specific domains like weather forecasts, medical reports or student reports, more varied domain entities form slots in templates, as we observe in our data; hence, a module handling domain-specific entities becomes essential for such a task. (Surdeanu et al., 2003) identify arguments for predicates in a sentence and emphasize how semantic role information may assist in IE-related tasks, but their primary focus remained on the extraction of PropBank-style (Kingsbury et al., 2002) semantic roles.", "cite_spans": [ { "start": 117, "end": 144, "text": "(Marsh & Perzanowski, 1998;", "ref_id": "BIBREF12" }, { "start": 145, "end": 163, "text": "Onyshkevych, 1994)", "ref_id": "BIBREF13" }, { "start": 168, "end": 183, "text": "ACE (ACE, 2007)", "ref_id": null }, { "start": 316, "end": 330, "text": "(Riloff, 1996)", "ref_id": "BIBREF17" }, { "start": 523, "end": 547, "text": "(Yangarber et al., 2000)", "ref_id": "BIBREF19" }, { "start": 770, "end": 785, "text": "(Collier, 1998)", "ref_id": "BIBREF3" }, { "start": 957, "end": 969, "text": "(Luhn, 1958)", "ref_id": "BIBREF11" }, { "start": 1211, "end": 1234, "text": "(Filatova et al., 2006)", "ref_id": "BIBREF6" }, { "start": 2260, "end": 2283, "text": "(Surdeanu et al., 2003)", "ref_id": "BIBREF18" }, { "start": 2471, "end": 2495, "text": "(Kingsbury et al., 2002)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "To our knowledge, the ROUGE metric has not previously been used for the automatic extraction of templates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "3 The Data", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Since our focus is on creating summary items from events or structured data rather than from text, we used a corpus from the domain of weather forecasts (Reiter et al., 2005) . This is a freely available parallel corpus 1 consisting of weather data and human-written forecasts describing them. The dataset shows regularity in sentence structure and belongs to a closed domain, making the variation in surface forms more constrained than in completely free text. After sentence segmentation we arrived at a set of 3262 sentences. From this set, we selected 3000 sentences for template extraction and kept aside 262 for testing.", "cite_spans": [ { "start": 153, "end": 174, "text": "(Reiter et al., 2005)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Data Description", "sec_num": "3.1" }, { "text": "For semantic analysis, we used the ASSERT toolkit (Pradhan et al., 2004) that produces shallow semantic parses using the PropBank conventions. As a by-product, it also produces syntactic parses of sentences, using the Charniak parser (Charniak, 2001) . For each sentence, we maintained a part-of-speech tagged (leaves of the parse tree), parsed, baseNP 2 tagged and semantic role tagged version. The baseNPs were retrieved by pruning the parse trees, not by using a separate NP chunker. The reason for maintaining a baseNP tagged corpus will become clear as we go into the details of our template extraction techniques. 
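To make the pruning step concrete, here is a minimal sketch; it assumes NLTK-style constituency trees, and the helper names and the example tree are ours rather than part of the corpus tooling.

from nltk.tree import Tree

def contains_np(t):
    # True if any node of t (including t itself) is labeled NP.
    if not isinstance(t, Tree):
        return False
    return t.label() == 'NP' or any(contains_np(c) for c in t)

def prune_to_base_nps(t):
    # Collapse each baseNP (an NP with no internal NP; footnote 2)
    # into a single leaf spanning its words.
    if not isinstance(t, Tree):
        return t
    if t.label() == 'NP' and not any(contains_np(c) for c in t):
        return Tree('NP', [' '.join(t.leaves())])
    return Tree(t.label(), [prune_to_base_nps(c) for c in t])

# 'a low' and 'the Norwegian Sea' collapse to single NP leaves; the PP survives.
tree = Tree.fromstring('(NP (NP (DT a) (NN low)) (PP (IN over) (NP (DT the) (NNP Norwegian) (NNP Sea))))')
print(prune_to_base_nps(tree))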
Figure 1 shows a typical output from the Charniak parser and Figure 2 shows the same tree with the nodes under the baseNPs pruned.", "cite_spans": [ { "start": 50, "end": 71, "text": "(Pradhan et al., 2004)", "ref_id": "BIBREF15" }, { "start": 235, "end": 251, "text": "(Charniak, 2001)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 618, "end": 626, "text": "Figure 1", "ref_id": null }, { "start": 679, "end": 687, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Preprocessing", "sec_num": "3.2" }, { "text": "We identified the need for a domain entity tagger for matching constituents in the sentences. No existing named entity tagger was suitable for weather forecasts since, unlike in newswire data, unique constituent types assume significance in this domain. Since the development of such a tagger was beyond the scope of the present work, we developed a module that took baseNP tagged sentences as input and produced tags across words and baseNPs that were domain entities. Developing such a module by hand was easy because of the limited vocabulary (< 1000 words) of the data and the closed-set nature of most entity types (e.g., the direction entity could take only a finite set of values). From inspection, thirteen distinct entity types were recognized in the domain. Figure 3 shows an example output from the entity recognizer with the sentence from Figure 2 as input.", "cite_spans": [], "ref_spans": [ { "start": 757, "end": 765, "text": "Figure 3", "ref_id": "FIGREF1" }, { "start": 840, "end": 848, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Preprocessing", "sec_num": "3.2" }, { "text": "We now provide a detailed description of our clustering and template extraction algorithms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "3.2" }, { "text": "We adopted two parallel approaches. First, we investigated a predicate-argument based approach in which we consider the set of all propositions in our dataset and cluster them based on their verb subcategorization frames. Second, we used ROUGE, a summarization evaluation metric that is generally used to compare machine-generated and human-written summaries. We instead used this metric for clustering similar summary items, after abstracting the surface forms to a representation that facilitates comparison of a pair of sentences. The following subsections detail both techniques.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach and Experiments", "sec_num": "4" }, { "text": "Analysis of predicate-argument structures seemed appropriate for template extraction for a few reasons. First, complicated sentences with multiple verbs are broken down into propositions by a semantic role labeler; the propositions 3 are more generalizable units than whole sentences across a corpus. Second, long-distance dependencies of constituents on a particular verb are captured well by a semantic role labeler. Finally, if verbs are considered to be the centers of events, then groups of sentences with the same semantic role sequences should form clusters conveying similar meaning. We explain the complete template extraction algorithm in the following subsections. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Predicate-Argument Based Approach", "sec_num": "4.1" }, { "text": "We performed verb-based clustering as the first step. Instead of treating every distinct verb separately, we considered related verbs as a single verb type, as sketched below. 
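A minimal sketch of this grouping uses NLTK's WordNet interface; keying each verb on its first verb synset is our simplification of the synset-merging criterion described next.

from nltk.corpus import wordnet as wn

def verb_group_key(verb):
    # Verbs sharing their first WordNet verb synset share a key;
    # verbs unknown to WordNet fall back to their own string.
    synsets = wn.synsets(verb, pos=wn.VERB)
    return synsets[0].name() if synsets else verb

def cluster_by_verb(propositions):
    # propositions: iterable of (target_verb, proposition) pairs
    clusters = {}
    for verb, prop in propositions:
        clusters.setdefault(verb_group_key(verb), []).append(prop)
    return clusters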
The relatedness of verbs was derived from WordNet (Fellbaum, 1998) , by merging verbs that appear in the same synset. This kind of clustering is not ideal for a corpus containing huge variation in event streams, like newswire; however, the results were good for the weather domain, where the number of verbs used is limited. The grouping procedure resulted in a set of 82 clusters with 6632 propositions.", "cite_spans": [ { "start": 203, "end": 219, "text": "(Fellbaum, 1998)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Verb based clustering", "sec_num": "4.1.1" }, { "text": "Each verb cluster was considered next. Instead of finding structural similarities of the propositions in one go, we first considered the semantic role sequence of each proposition. We searched for propositions that had identical role sequences and grouped them together. To give an example, both sentences 5 and 6 have the matching role sequence ARG0-ARGM-MOD-TARGET-ARGM-DIR. The intuition behind such clustering is straightforward: propositions with a matching verb type and the same set of roles arranged in the same order should convey similar meaning. We observed that this was indeed true for sentences tagged with correct semantic role labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Matching Role Sequences", "sec_num": "4.1.2" }, { "text": "Instead of considering matching role sequences for a set of propositions, we could as well have considered matching bags of roles. However, for the present corpus, we decided to use strict role sequences because of the sentences' rigid structure and the absence of passive sentences. This subclustering step resulted in smaller clusters, many of which contained a single proposition. We discarded these singleton clusters on the assumption that the human summarizers did not necessarily have a template in mind while writing those summary items. As a result, many verb types were eliminated and only 33 verb-type clusters, each containing several subclusters, were produced.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Matching Role Sequences", "sec_num": "4.1.2" }, { "text": "Groups of propositions with the same verb type and semantic role sequence were considered in this step. For each group, we looked at the individual semantic roles to determine the similarity between them. We decided at first to look at syntactic parse tree similarities between constituents. However, one must decide at what level of abstraction the parse trees should be matched. After considerable deliberation, we decided to prune the constituents' parse trees to the level of baseNPs and then match the resulting tag sequences. The pruned parse trees from the preprocessing steps provide the necessary information for constituent matching. Figure 4 shows matching syntactic trees for two ARG0s from two propositions of a cluster. It is at this step that we use the domain entity tags to abstract away the constituents' syntactic tags. 
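As a small sketch of this abstraction (representing a constituent as its (word, tag) leaves plus an index-to-entity map is our own simplification):

def abstract_constituent(tagged_leaves, entity_tags):
    # tagged_leaves: [(word, tag)] leaves of a pruned constituent tree.
    # entity_tags: {leaf_index: domain entity type} from the entity tagger.
    # Replace the syntactic tag with the domain entity type where one exists.
    return [entity_tags.get(i, tag) for i, (word, tag) in enumerate(tagged_leaves)]

print(abstract_constituent([('a', 'DT'), ('low', 'NN')], {1: 'PRESSURE_ENTITY'}))
# -> ['DT', 'PRESSURE_ENTITY']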
Figure 5 shows the constituents of Figure 4 with the tree structure reduced to tag sequences and domain entity types replacing the tags wherever necessary.", "cite_spans": [], "ref_spans": [ { "start": 666, "end": 674, "text": "Figure 4", "ref_id": null }, { "start": 861, "end": 869, "text": "Figure 5", "ref_id": "FIGREF3" }, { "start": 896, "end": 904, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Looking inside Roles", "sec_num": "4.1.3" }, { "text": "This abstraction step produces a number of unique domain-entity-augmented tag sequences for a particular semantic role. As a final step of template generation, we concatenate these abstracted constituent types for all the semantic roles in the given group.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Looking inside Roles", "sec_num": "4.1.3" }, { "text": "To focus on template-like structures, we only consider tag sequences that occur at least twice in the group.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Looking inside Roles", "sec_num": "4.1.3" }, { "text": "The templates produced at the end of this step are essentially tag sequences interspersed with domain entities. In our definition of templates, the slots are the entity types and the fixed parts consist of the word(s) used by the human experts for a particular tag sequence. Figure 6 shows some example templates. The upper-case words in the figure correspond to the domain entities identified by the entity tagger, and they form the slots in the templates. A total of 209 templates were produced.", "cite_spans": [], "ref_spans": [ { "start": 279, "end": 287, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Looking inside Roles", "sec_num": "4.1.3" }, { "text": "PRESSURE_ENTITY to DIRECTION of LOCATION will drift slowly WAVE will run_0.5/move_0.5 DIRECTION then DIRECTION Associated PRESSURE_ENTITY will move DIRECTION across LOCATION TIME PRESSURE_ENTITY expected over LOCATION by_0.5/on_0.5 DAY Figure 6 : Example Templates. Upper-case tokens correspond to slots. For fixed parts, when there is a choice between words, the probabilities of occurrence of the words in that particular syntactic structure are tagged alongside.", "cite_spans": [], "ref_spans": [ { "start": 236, "end": 244, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Looking inside Roles", "sec_num": "4.1.3" }, { "text": "ROUGE (Lin, 2004) is the standard automatic evaluation metric in the summarization community. It is derived from the BLEU (Papineni et al., 2001) score, the evaluation metric used in the machine translation community. The underlying idea is to compare the candidate and the reference sentences (or summaries) based on their token co-occurrence statistics; for example, a unigram-based measure compares the vocabulary overlap between the candidate and the reference. Intuitively, then, the ROUGE score can serve as a measure for clustering sentences. Amongst the various ROUGE statistics, the most appealing is the Weighted Longest Common Subsequence (WLCS), which favors contiguous common subsequences; this corresponds to the intuition of finding a common template. We experimented with other ROUGE statistics but obtained better and more easily interpretable results using WLCS, so we chose it as the final metric. In all the approaches, the data was first preprocessed (baseNP and NE tagged) as described in the previous subsection. 
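For reference, a compact sketch of the WLCS statistic follows, after the dynamic program in (Lin, 2004); the weight exponent alpha and the function names are our choices.

def wlcs(cand, ref, alpha=1.2):
    # Weighted LCS (Lin, 2004): the weight f(k) = k**alpha rewards
    # contiguous runs of matching tokens.
    f = lambda k: k ** alpha
    n, m = len(cand), len(ref)
    c = [[0.0] * (m + 1) for _ in range(n + 1)]  # accumulated weighted score
    w = [[0] * (m + 1) for _ in range(n + 1)]    # length of the current run
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if cand[i - 1] == ref[j - 1]:
                k = w[i - 1][j - 1]
                c[i][j] = c[i - 1][j - 1] + f(k + 1) - f(k)
                w[i][j] = k + 1
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j] = c[i - 1][j]
            else:
                c[i][j] = c[i][j - 1]
    return c[n][m]

def wlcs_precision_recall(cand, ref, alpha=1.2):
    # P = f_inv(WLCS/f(|cand|)), R = f_inv(WLCS/f(|ref|)), f_inv(s) = s**(1/alpha).
    score = wlcs(cand, ref, alpha)
    p = (score / len(cand) ** alpha) ** (1.0 / alpha) if cand else 0.0
    r = (score / len(ref) ** alpha) ** (1.0 / alpha) if ref else 0.0
    return p, r

Factoring the score into separate precision and recall terms, rather than using the combined F-measure, becomes important in the deterministic clustering below.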
In the following subsections, we describe the various clustering techniques that we tried using the ROUGE score, followed by the alignment technique.", "cite_spans": [ { "start": 6, "end": 17, "text": "(Lin, 2004)", "ref_id": "BIBREF10" }, { "start": 122, "end": 144, "text": "(Papineni et al., 2001)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "A ROUGE Based Approach", "sec_num": "4.2" }, { "text": "Unsupervised Clustering: As the ROUGE score defines a distance metric, we can use it for unsupervised clustering. We tried hierarchical clustering approaches but did not obtain good clusters under empirical evaluation; that is, we manually inspected the output clusters and made a judgment call as to whether the candidate clusters were reasonably coherent and potentially corresponded to templates. The reason for the poor performance of the approach was the classical parameter estimation problem of determining the number of clusters a priori. We could not find an elegant solution to this problem without compromising the motivation of a fully automated approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering", "sec_num": "4.2.1" }, { "text": "Since the unsupervised technique did not give good results, we experimented with a non-parametric clustering approach, namely Cross-Association (Chakrabarti et al., 2004) , a non-parametric unsupervised clustering algorithm for (boolean) similarity matrices. We obtained the similarity matrix in our domain by thresholding the ROUGE similarity score matrix. This technique also did not give us good clusters under empirical evaluation. The plausible reason for the poor performance is that the technique is based on the MDL (Minimum Description Length) principle; since in our domain we expect a large number of clusters with small membership, along with many singletons, the MDL principle is not likely to perform well.", "cite_spans": [ { "start": 144, "end": 170, "text": "(Chakrabarti et al., 2004)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Non-parametric Unsupervised Clustering:", "sec_num": null }, { "text": "As the unsupervised techniques did not perform well, we tried deterministic clustering based on graph connectivity. The underlying intuition is that all the sentences X_1, ..., X_n that are \"similar\" to some sentence Y_i should be in the same cluster, even though X_j and X_k may not be \"similar\" to each other. Thus we find the connected components in the similarity matrix and label each component as an individual cluster. 4 We created the similarity matrix by thresholding the ROUGE score. Initially, the clusters obtained by this approach were also not good under empirical evaluation. This led us to revisit the similarity function and tune it. We factored the ROUGE-WLCS score, which is an F-measure, into its component precision and recall scores and experimented with various combinations of the two. We finally chose a combined precision and recall measure (not F-measure) in which both scores are independently thresholded. The motivation for this measure is that in our domain we desire high-precision matches; additionally, we need to control the length of the sentences in a cluster, for which we require a recall threshold. F-measure (the harmonic mean of precision and recall) does not give us the required individual control. 
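Putting the pieces together, a sketch of this deterministic clustering, reusing wlcs_precision_recall from the sketch above (the union-find helper, the singleton filter and all parameter names are ours):

def deterministic_clusters(sentences, p_thresh, r_thresh, alpha=1.2):
    # Single-linkage clustering (footnote 4): link two sentences when both
    # the WLCS precision and recall clear their independent thresholds,
    # then return the connected components.
    parent = list(range(len(sentences)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            # shorter sentence as candidate, longer as reference (see below)
            cand, ref = sorted((sentences[i], sentences[j]), key=len)
            p, r = wlcs_precision_recall(cand, ref, alpha)
            if p >= p_thresh and r >= r_thresh:
                parent[find(i)] = find(j)
    comps = {}
    for i in range(len(sentences)):
        comps.setdefault(find(i), []).append(sentences[i])
    return [c for c in comps.values() if len(c) > 1]  # drop singletons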
We set up our experiments such that, when comparing two sentences, the longer sentence is always treated as the reference and the shorter one as the candidate. This helps us interpret the precision/recall measures and threshold them accordingly. The approach gave us 149 clusters, which looked good under empirical evaluation. One could argue that using this modified similarity function in the earlier unsupervised approaches might have given better results, but we did not re-evaluate those approaches: our aim of obtaining a reasonable clustering is fulfilled by this simple scheme, and tuning the unsupervised approaches is interesting future work.", "cite_spans": [ { "start": 412, "end": 413, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Deterministic Clustering:", "sec_num": null }, { "text": "After obtaining the clusters using the deterministic approach, we needed to find the template corresponding to each cluster. Fairly intuitively, we computed the Longest Common Subsequence (LCS) of the sentences in each cluster, which we then take to be the template for that cluster. This resulted in a set of 149 templates which, like those of the predicate-argument based approach, resemble the examples shown in Figure 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment", "sec_num": "4.3" }, { "text": "Since there is no standard way to evaluate template extraction for summary creation, we adopted a mix of subjective and automatic measures for evaluating the extracted templates. We define precision for this particular problem as: precision = (number of domain relevant templates) / (total number of extracted templates). This is a subjective measure, and we undertook a study involving three subjects who were accustomed to the language used in the corpus. We asked the human subjects to mark each template as relevant or non-relevant to the weather forecast domain. We also asked them to mark each non-relevant template as grammatical or ungrammatical.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Scheme", "sec_num": "5.1" }, { "text": "Our other metric for evaluation is automatic recall. It uses the ROUGE-WLCS metric to determine matches between the preprocessed (baseNP and NE tagged) test corpus and the proposed set of correct templates, a set determined by taking the intersection of the templates marked relevant by all three judges. For the ROUGE based method, the test corpus consists of 262 sentences, while for the predicate-argument based method it consists of 263 propositions extracted from the 262 sentences using ASSERT, followed by a filtering of invalid propositions (e.g., ones starting with a verb). Amongst the different ROUGE scores (precision/recall/f-measure), we use precision as the criterion for deciding a match and experimented with different threshold values. Table 1 shows the precision values for the 10 most frequently occurring verbs. (Since a major proportion (> 90%) of the templates is covered by these verbs, we do not show all the precision values; this also saves space.) The overall precision achieved was 84.21%, with an inter-rater Fleiss' kappa (Fleiss, 1971) of \u03ba = 0.69 between the judges, demonstrating substantial agreement. 
The precision values are encouraging, and in most cases low precision is due to erroneous output from the semantic role labeler, which is corroborated by the percentage (47.47%) of ungrammatical templates among the irrelevant ones. Results for the automated recall are shown in Figure 8 , where the precision threshold is varied to observe the recall. For a ROUGE-WLCS precision of 0.9, the recall is 0.3, indicating near-exact coverage of 30% of the propositions, while for a ROUGE-WLCS precision of 0.6, the recall is an encouraging 81%. ", "cite_spans": [ { "start": 1096, "end": 1110, "text": "(Fleiss, 1971)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 778, "end": 785, "text": "Table 1", "ref_id": null }, { "start": 1497, "end": 1505, "text": "Figure 8", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Evaluation Scheme", "sec_num": "5.1" }, { "text": "Various precision and recall thresholds for ROUGE were considered for clustering. We empirically settled on a recall threshold of 0.8, since this produces the set of clusters with the optimal number of sentences. The number of clusters and the number of sentences in clusters at this recall value are shown in Figure 9 for various precision thresholds. Precision was measured in the same way as for the predicate-argument approach; the value obtained was 76.3%, with a Fleiss' kappa of \u03ba = 0.79. The percentage of ungrammatical templates among the irrelevant ones was 96.7%, strongly indicating that post-processing the templates with a parser can, in future, give substantial improvement. During error analysis, we observed simple grammatical errors in the templates, such as the first or last word being a preposition. So a fairly simple error recovery module that strips leading and trailing prepositions was introduced. 20 of the 149 templates were modified by the error recovery module, and these were evaluated again by the three judges. The precision obtained for the modified templates was 35%, with a Fleiss' kappa of \u03ba = 1, boosting the overall precision to 80.98%. The overall high precision is encouraging, as this is a fairly general approach that does not require any NLP resources. Figure 8 shows the automated recall values for the templates and the abstracted sentences from the held-out dataset. For high precision thresholds, the recall is low because most cases lack an exact match.", "cite_spans": [], "ref_spans": [ { "start": 302, "end": 310, "text": "Figure 9", "ref_id": "FIGREF6" }, { "start": 1279, "end": 1287, "text": "Figure 8", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Results: ROUGE based approach", "sec_num": "5.3" }, { "text": "In this paper, we described two new approaches to template extraction for briefing generation. For both approaches, high precision values indicate that meaningful templates are being extracted. However, the recall values were moderate, and they hint at possible improvements. An interesting direction for future research is to merge the two approaches and have each technique benefit from the other. The approaches seem complementary: the ROUGE based technique does not use the structure of the sentence at all, whereas the predicate-argument approach depends heavily on it. Moreover, the predicate-argument based approach gives general templates with one predicate, while the ROUGE based approach can extract templates containing multiple verbs. 
It would also be desirable to establish the generality of these techniques by applying them to other domains, such as newswire and medical reports.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "http://www.csd.abdn.ac.uk/research/sumtime/ 2 A baseNP is a noun-phrase with no internal noun-phrase", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "sentence fragments with one verb", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This approach is similar to agglomerative single linkage clustering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to express our gratitude to William Cohen and Noah Smith for their valuable suggestions and inputs during the course of this work. We also thank the three anonymous reviewers for helpful suggestions. This work was supported by DARPA grant NBCHD030010. The content of the information in this publication does not necessarily reflect the position or the policy of the US Government, and no official endorsement should be inferred.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Automatic content extraction program", "authors": [], "year": 2007, "venue": "ACE", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "ACE (2007). Automatic content extraction program. http://www.nist.gov/speech/tests/ace/.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Fully automatic cross-associations", "authors": [ { "first": "D", "middle": [], "last": "Chakrabarti", "suffix": "" }, { "first": "S", "middle": [], "last": "Papadimitriou", "suffix": "" }, { "first": "D", "middle": [ "S" ], "last": "Modha", "suffix": "" }, { "first": "C", "middle": [], "last": "Faloutsos", "suffix": "" } ], "year": 2004, "venue": "Proceedings of KDD '04", "volume": "", "issue": "", "pages": "79--88", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chakrabarti, D., Papadimitriou, S., Modha, D. S., & Faloutsos, C. (2004). Fully automatic cross-associations. Proceedings of KDD '04 (pp. 79-88). New York, NY, USA: ACM Press.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Immediate-head parsing for language models", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2001, "venue": "Proceedings of ACL '01", "volume": "", "issue": "", "pages": "116--123", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charniak, E. (2001). Immediate-head parsing for language models. Proceedings of ACL '01 (pp. 116-123).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Automatic template creation for information extraction. Doctoral dissertation", "authors": [ { "first": "R", "middle": [], "last": "Collier", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collier, R. (1998). Automatic template creation for information extraction. 
Doctoral dissertation, University of Sheffield.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Customization in a unified framework for summarizing medical literature", "authors": [ { "first": "N", "middle": [], "last": "Elhadad", "suffix": "" }, { "first": "M.-Y", "middle": [], "last": "Kan", "suffix": "" }, { "first": "J", "middle": [ "L" ], "last": "Klavans", "suffix": "" }, { "first": "K", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2005, "venue": "Artificial Intelligence in Medicine", "volume": "33", "issue": "", "pages": "179--198", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elhadad, N., Kan, M.-Y., Klavans, J. L., & McKeown, K. (2005). Customization in a unified framework for summarizing medical literature. Artificial Intelligence in Medicine, 33, 179-198.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "WordNet -An Electronic Lexical Database", "authors": [ { "first": "C", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fellbaum, C. (1998). WordNet -An Electronic Lexical Database. MIT Press.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Automatic creation of domain templates", "authors": [ { "first": "E", "middle": [], "last": "Filatova", "suffix": "" }, { "first": "V", "middle": [], "last": "Hatzivassiloglou", "suffix": "" }, { "first": "K", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2006, "venue": "Proceedings of COLING/ACL", "volume": "", "issue": "", "pages": "207--214", "other_ids": {}, "num": null, "urls": [], "raw_text": "Filatova, E., Hatzivassiloglou, V., & McKeown, K. (2006). Automatic creation of domain templates. Proceedings of COLING/ACL 2006 (pp. 207-214).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Measuring nominal scale agreement among many raters", "authors": [ { "first": "J", "middle": [], "last": "Fleiss", "suffix": "" } ], "year": 1971, "venue": "Psychological Bulletin", "volume": "", "issue": "", "pages": "378--382", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fleiss, J. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin (pp. 378-382).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Adding semantic annotation to the penn treebank", "authors": [ { "first": "P", "middle": [], "last": "Kingsbury", "suffix": "" }, { "first": "M", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "M", "middle": [], "last": "Marcus", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the HLT'02", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kingsbury, P., Palmer, M., & Marcus, M. (2002). Adding semantic annotation to the Penn Treebank. Proceedings of the HLT'02.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Learning from the report-writing behavior of individuals", "authors": [ { "first": "M", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "N", "middle": [], "last": "Garera", "suffix": "" }, { "first": "A", "middle": [ "I" ], "last": "Rudnicky", "suffix": "" } ], "year": 2007, "venue": "IJCAI", "volume": "", "issue": "", "pages": "1641--1646", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kumar, M., Garera, N., & Rudnicky, A. I. (2007). Learning from the report-writing behavior of individuals. IJCAI (pp. 
1641-1646).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "ROUGE: A package for automatic evaluation of summaries", "authors": [ { "first": "C.-Y", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Proceedings of Workshop on Text Summarization", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, C.-Y. (2004). ROUGE: A package for automatic evaluation of summaries. Proceedings of Workshop on Text Summarization.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The automatic creation of literature abstracts", "authors": [ { "first": "H", "middle": [ "P" ], "last": "Luhn", "suffix": "" } ], "year": 1958, "venue": "IBM Journal of Research Development", "volume": "2", "issue": "", "pages": "159--165", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luhn, H. P. (1958). The automatic creation of literature abstracts. IBM Journal of Research Development, 2, 159-165.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "MUC-7 Evaluation of IE Technology: Overview of Results", "authors": [ { "first": "E", "middle": [], "last": "Marsh", "suffix": "" }, { "first": "D", "middle": [], "last": "Perzanowski", "suffix": "" } ], "year": 1998, "venue": "Proceedings of MUC-7", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marsh, E., & Perzanowski, D. (1998). MUC-7 Evaluation of IE Technology: Overview of Results. Proceedings of MUC-7. Fairfax, Virginia.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Issues and methodology for template design for information extraction", "authors": [ { "first": "B", "middle": [], "last": "Onyshkevych", "suffix": "" } ], "year": 1994, "venue": "Proceedings of HLT '94", "volume": "", "issue": "", "pages": "171--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Onyshkevych, B. (1994). Issues and methodology for template design for information extraction. Proceedings of HLT '94 (pp. 171-176). Morristown, NJ, USA.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "K", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "T", "middle": [], "last": "Ward", "suffix": "" }, { "first": "W", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Papineni, K., Roukos, S., Ward, T., & Zhu, W. (2001). Bleu: a method for automatic evaluation of machine translation.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Shallow semantic parsing using support vector machines", "authors": [ { "first": "S", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "W", "middle": [], "last": "Ward", "suffix": "" }, { "first": "K", "middle": [], "last": "Hacioglu", "suffix": "" }, { "first": "J", "middle": [], "last": "Martin", "suffix": "" }, { "first": "D", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2004, "venue": "Proceedings of HLT/NAACL '04", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pradhan, S., Ward, W., Hacioglu, K., Martin, J., & Jurafsky, D. (2004). Shallow semantic parsing using support vector machines. Proceedings of HLT/NAACL '04. 
Boston, MA.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Choosing words in computer-generated weather forecasts", "authors": [ { "first": "E", "middle": [], "last": "Reiter", "suffix": "" }, { "first": "S", "middle": [], "last": "Sripada", "suffix": "" }, { "first": "J", "middle": [], "last": "Hunter", "suffix": "" }, { "first": "J", "middle": [], "last": "Yu", "suffix": "" }, { "first": "I", "middle": [], "last": "Davy", "suffix": "" } ], "year": 2005, "venue": "Artif. Intell", "volume": "167", "issue": "", "pages": "137--169", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reiter, E., Sripada, S., Hunter, J., Yu, J., & Davy, I. (2005). Choosing words in computer-generated weather forecasts. Artif. Intell., 167, 137-169.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Automatically generating extraction patterns from untagged text", "authors": [ { "first": "E", "middle": [], "last": "Riloff", "suffix": "" } ], "year": 1996, "venue": "AAAI/IAAI", "volume": "2", "issue": "", "pages": "1044--1049", "other_ids": {}, "num": null, "urls": [], "raw_text": "Riloff, E. (1996). Automatically generating extraction patterns from untagged text. AAAI/IAAI, Vol. 2 (pp. 1044-1049).", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Using predicate-argument structures for information extraction", "authors": [ { "first": "M", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "S", "middle": [], "last": "Harabagiu", "suffix": "" }, { "first": "J", "middle": [], "last": "Williams", "suffix": "" }, { "first": "P", "middle": [], "last": "Aarseth", "suffix": "" } ], "year": 2003, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Surdeanu, M., Harabagiu, S., Williams, J., & Aarseth, P. (2003). Using predicate-argument structures for information extraction. Proceedings of ACL 2003.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Automatic acquisition of domain knowledge for information extraction", "authors": [ { "first": "R", "middle": [], "last": "Yangarber", "suffix": "" }, { "first": "R", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "P", "middle": [], "last": "Tapanainen", "suffix": "" }, { "first": "S", "middle": [], "last": "Huttunen", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 18th conference on Computational linguistics", "volume": "", "issue": "", "pages": "940--946", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yangarber, R., Grishman, R., Tapanainen, P., & Huttunen, S. (2000). Automatic acquisition of domain knowledge for information extraction. Proceedings of the 18th conference on Computational linguistics (pp. 940-946). Morristown, NJ, USA.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "text": "Parse tree for a sentence in the data. 
Pruned parse tree for a sentence in the corpus", "uris": null }, "FIGREF1": { "type_str": "figure", "num": null, "text": "Example output of the entity recognizer", "uris": null }, "FIGREF2": { "type_str": "figure", "num": null, "text": "(5) [ARG0 A low over the Norwegian Sea] [ARGM-MOD will] [TARGET move ] [ARGM-DIR North ] and weaken (6) [ARG0 A high pressure area ] [ARGM-MOD will ] [TARGET move] [ARGM-DIR southwestwards] and build on Sunday.", "uris": null }, "FIGREF3": { "type_str": "figure", "num": null, "text": "Abstracted tag sequences for two constituents", "uris": null }, "FIGREF4": { "type_str": "figure", "num": null, "text": "Deterministic clustering based on graph connectivity. In the figure, the squares with the same pattern belong to the same cluster.", "uris": null }, "FIGREF5": { "type_str": "figure", "num": null, "text": "Automated recall based on the ROUGE-WLCS measure, comparing the test corpora with the set of templates extracted by the predicate-argument (SRL) and the ROUGE based methods.", "uris": null }, "FIGREF6": { "type_str": "figure", "num": null, "text": "Number of clusters and total number of sentences in clusters for various precision thresholds at recall threshold = 0.8", "uris": null }, "TABREF1": { "num": null, "text": "(4) [PRESSURE ENTITY] from [LOCATION] to [LOCATION] will move [DIRECTION] across [LOCATION] [TIME]", "type_str": "table", "html": null, "content": "" } } } }