Benjamin Aw
Add updated pkl file v3
6fa4bc9
{
"paper_id": "M91-1030",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:15:30.174030Z"
},
"title": "SRI INTERNATIONAL : DESCRIPTION OF THE TACITUS SYSTE M AS USEI) FOR MUC-3",
"authors": [
{
"first": "Jerry",
"middle": [
"R"
],
"last": "Hobb",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SRI International Menlo Park",
"location": {
"postCode": "9402 5",
"region": "California"
}
},
"email": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Appelt",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SRI International Menlo Park",
"location": {
"postCode": "9402 5",
"region": "California"
}
},
"email": ""
},
{
"first": "John",
"middle": [],
"last": "Bear",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SRI International Menlo Park",
"location": {
"postCode": "9402 5",
"region": "California"
}
},
"email": ""
},
{
"first": "Jerry",
"middle": [],
"last": "Hobbs",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SRI International Menlo Park",
"location": {
"postCode": "9402 5",
"region": "California"
}
},
"email": "hobbs@ai.sri.com"
},
{
"first": "David",
"middle": [],
"last": "Magerman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SRI International Menlo Park",
"location": {
"postCode": "9402 5",
"region": "California"
}
},
"email": ""
},
{
"first": "An",
"middle": [
"N"
],
"last": "Podlozny",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SRI International Menlo Park",
"location": {
"postCode": "9402 5",
"region": "California"
}
},
"email": ""
},
{
"first": "Mark",
"middle": [],
"last": "Stickel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SRI International Menlo Park",
"location": {
"postCode": "9402 5",
"region": "California"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "BACKGROUN D TACITUS is a system for interpreting natural language texts that has been under development sinc e 1985. It has a preprocessor and postprocessor currently tailored to the MUC-3 application. It perform s a syntactic analysis of the sentences in the text, using a fairly complete grammar of English, producing a logical form in first-order predicate calculus. Pragmatics problems are solved by abductive inference in a pragmatics, or interpretation, component. The original purpose of TACITUS was to aid us in investigating the problems of inferencing in natura l language. For that reason, the system employed a straight-line modularization, with syntactic analysis bein g done by the already-developed DIALOGIC parser and grammar ; only the correct parse was chosen an d passed on the the inferencing component. With the discovery of the abduction framework in 1987 [1], we realized that the proper way to deal wit h syntax-pragmatics interactions was in a unified abductive framework. However, the overhead in implementin g such an approach at the level of coverage that the DIALOGIC system already provided would have bee n enormous, so that effort was not pursued, and we continued to focus on pragmatics problems. When we began to participate in the MUC-2 and MUC-3 evaluations, we could no longer chose manuall y which syntactic analysis to process, so we began to invest more effort in the implementation of heuristics for choosing the right parse. We do not view this as the ideal way of handling syntax-pragmatics interactions , but, on the other hand, it has forced us into the development of these heuristics to a point of remarkable success, as an analysis of our results in the latest evaluation demonstrate. We developed a preprocessor for MUC-2 and modified it for MUC-3. Our relevance filter was develope d for MUC-3, as was our current template-generation component. Those involved in the MUC-3 effort were",
"pdf_parse": {
"paper_id": "M91-1030",
"_pdf_hash": "",
"abstract": [
{
"text": "BACKGROUN D TACITUS is a system for interpreting natural language texts that has been under development sinc e 1985. It has a preprocessor and postprocessor currently tailored to the MUC-3 application. It perform s a syntactic analysis of the sentences in the text, using a fairly complete grammar of English, producing a logical form in first-order predicate calculus. Pragmatics problems are solved by abductive inference in a pragmatics, or interpretation, component. The original purpose of TACITUS was to aid us in investigating the problems of inferencing in natura l language. For that reason, the system employed a straight-line modularization, with syntactic analysis bein g done by the already-developed DIALOGIC parser and grammar ; only the correct parse was chosen an d passed on the the inferencing component. With the discovery of the abduction framework in 1987 [1], we realized that the proper way to deal wit h syntax-pragmatics interactions was in a unified abductive framework. However, the overhead in implementin g such an approach at the level of coverage that the DIALOGIC system already provided would have bee n enormous, so that effort was not pursued, and we continued to focus on pragmatics problems. When we began to participate in the MUC-2 and MUC-3 evaluations, we could no longer chose manuall y which syntactic analysis to process, so we began to invest more effort in the implementation of heuristics for choosing the right parse. We do not view this as the ideal way of handling syntax-pragmatics interactions , but, on the other hand, it has forced us into the development of these heuristics to a point of remarkable success, as an analysis of our results in the latest evaluation demonstrate. We developed a preprocessor for MUC-2 and modified it for MUC-3. Our relevance filter was develope d for MUC-3, as was our current template-generation component. Those involved in the MUC-3 effort were",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The system has six modules . As we describe them, their performance on Message 99 of TST1 will b e described in detail, especially their performance on the first two sentences . Then their performance on th e first 20 messages of TST2 will be summarized .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE MODULES OF THE SYSTE M",
"sec_num": null
},
{
"text": "This component regularizes the expression of certain phenomena, such as dates, times, and punctuation . In addition, it decides what to do with unknown words . There are three choices, and these are applie d sequentially .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocesso r",
"sec_num": null
},
{
"text": "1. Spelling Correction . This is applied only to words longer than five letters .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocesso r",
"sec_num": null
},
{
"text": "and English words was developed and is used to assign the category Last-Name to some of the word s that are not spell-corrected .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hispanic Name Recognition . A statistical trigram model for distinguishing between Hispanic surname s",
"sec_num": "2."
},
{
"text": "3 . Morphological Category Assignment . Words that are not spell-corrected or classified as last names, ar e assigned a category on the basis of morphology . Words ending in \"-ing\" or \"-ed\" are classified as verbs . Words ending in \"-ly\" are classified as adverbs . All other unknown words are taken to be nouns . This misses adjectives entirely, but this is generally harmless, because the adjectives incorrectly classifie d as nouns will still parse as prenominal nouns in compound nominals . The grammar will recognize an unknown noun as a name in the proper environment .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hispanic Name Recognition . A statistical trigram model for distinguishing between Hispanic surname s",
"sec_num": "2."
},
{
"text": "There were no unknown words in Message 99, since all the words used in the TST1 set had been entere d into the lexicon .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hispanic Name Recognition . A statistical trigram model for distinguishing between Hispanic surname s",
"sec_num": "2."
},
{
"text": "In the first 20 messages of TST2, there were 92 unknown words . Each of the heuristics either did or di d not apply to the word . If it did, the results could have been correct, harmless, or wrong . An example of a harmless spelling correction is \"twin-engined \" to the adjective \"twin-engine \" . A wrong spelling correctio n is the verb \"nears\" to the preposition \"near\" . An example of a harmless assignment of Hispanic surname to a word is the Japanese name \"Akihito\" . A wrong assignment is the word \"panorama \" . A harmles s morphological assignment of a category to a word is the assignment of Verb to \"undispute \" and \"originat\" . A wrong assignment is the assignment of Noun to \"upriver\" .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hispanic Name Recognition . A statistical trigram model for distinguishing between Hispanic surname s",
"sec_num": "2."
},
{
"text": "The results were as follows : If we look only at the Correct column, only the morphological assignment heuristic is at all effective , giving us 62%, as opposed to 32% for spelling correction and 40% for Hispanic surname assignment . However , Harmless assignments are often much better than merely harmless ; they often allow a sentence to parse tha t otherwise would not . If we count both the Correct and Harmless columns, then spelling correction is effectiv e 80% of the time, Hispanic surname assignment 90% of the time, and morphological assignment 86% .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hispanic Name Recognition . A statistical trigram model for distinguishing between Hispanic surname s",
"sec_num": "2."
},
{
"text": "Unknown Applied",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hispanic Name Recognition . A statistical trigram model for distinguishing between Hispanic surname s",
"sec_num": "2."
},
{
"text": "Using the three heuristics in tandem meant that 85% of the unknown words were handled either correctl y or harmlessly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hispanic Name Recognition . A statistical trigram model for distinguishing between Hispanic surname s",
"sec_num": "2."
},
{
"text": "This component works on a sentence-by-sentence basis and decides whether the sentence should b e submitted to further processing . It consists of two subcomponents .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance Filte r",
"sec_num": null
},
{
"text": "1. Statistical Relevance Filter . We went through the 1300-text development set and identified the relevan t sentences . We then developed a unigram, bigram, and trigram statistical model for relevance on the basis of this data . We chose our cutoffs so that we would identify 85% of the relevant sentences an d overgenerate by no more than 300% . The component is now apparently much better than this .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance Filte r",
"sec_num": null
},
{
"text": "2. Keyword Antifilter . In an effort to capture those sentences that slip through the statistical relevanc e filter, we developed an antifilter based on certain keywords . If a sentence in the text proves to contai n relevant information, the next few sentences will be declared relevant as well if they contain certain keywords .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance Filte r",
"sec_num": null
},
{
"text": "In Message 99, the statistical filter determined 9 sentences to be relevant . All of them were relevan t except for one, Sentence 13 . No relevant sentences were missed . The keyword antifilter decided incorrectl y that two other sentences were relevant, Sentences 8 and 9 . This behavior is typical .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance Filte r",
"sec_num": null
},
{
"text": "In the first 20 messages of the TST2 set, the results were as follows : There were 370 sentences . Th e statistical relevance filter produced the following results : Actually Actuall y Relevant Irrelevan t Judged 42 3 3 Relevan t Judged 9 28 6 Irrelevan t Thus, recall was 82% . Precision was 56% . These results are excellent . They mean that, using this filter alone, we would have processed only 20% of the sentences in the corpus, processing less than twice as man y as were actually relevant, and only missing 18% of the relevant sentences .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance Filte r",
"sec_num": null
},
{
"text": "The results of the keyword antifilter were as follows :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance Filte r",
"sec_num": null
},
{
"text": "Actually Relevant Actuall y Irrelevant Judged 5 5 7 Relevan t Judged 4 22 7 Irrelevant Clearly, the results here are not nearly as good . Recall was 55% and precision was 8% . This means that to capture half the remaining relevant sentences, we had to nearly triple the number of irrelevant sentences w e processed . Using the filter and antifilter in tandem, we had to process 37% of the sentences . The conclusio n is that if the keyword antifilter is to be retained, it must be refined considerably .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance Filte r",
"sec_num": null
},
{
"text": "Incidentally, of the four relevant sentences that escaped both the filter and the antifilter, two containe d only redundant information that could have been picked up elsewhere in the text . The other two containe d information essential to 10 slots in templates .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevance Filte r",
"sec_num": null
},
{
"text": "The sentences that are declared relevant are parsed and translated into logical form . This is done using the DIALOGIC system, developed in 1980-1 essentially by constructing the union of the Linguistic Strin g Project Grammar and the DIAGRAM grammar which grew out of SRI ' s Speech Understanding Syste m research in the 1970s . Since that time it has been considerably enhanced . It consists of about 160 phras e structure rules . Associated with each rule is a \"constructor\" expressing the constraints on the applicabilit y of that rule, and a \"translator\" for producing the logical form .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Analysi s",
"sec_num": null
},
{
"text": "The parser used by the system is a recently developed agenda-based scheduling chart-parser . As nodes and edges are built, they are rated and only a certain number of them are retained for further parsing . Thi s number is a parameter the user can set . The nodes and edges are rated on the basis of their scores from the preference heuristics . Prior to November 1990, we used a simple, exhaustive, bottom-up parser, wit h the result that sentences of more 15 or 20 words could not be parsed . The use of the scheduling parser ha s made it feasible to parse sentences of up to 60 words .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Analysi s",
"sec_num": null
},
{
"text": "For sentences of longer than 60 words and for faster, though less accurate, parsing of shorter sentences, w e developed a technique we are calling \"terminal substring parsing\" . The sentence is segmented into substrings , by breaking it at commas, conjunctions, relative pronouns, and certain instances of the word \"that\" . Th e substrings are then parsed, starting with the last one and working hack . For each substring, we try either to parse the substring itself as one of several categories or to parse the entire set of substrings parsed so far a s one of those categories . The best such structure is selected, and for subsequent processing, that is the onl y analysis of that portion of the sentence allowed . The categories that we look for include main, subordinate , and relative clauses, infinitives, verb phrases, prepositional phrases, and noun phrases . The effect of thi s technique is to give only short \"sentences\" to the parser, without losing the possibility of getting a singl e parse for the entire long sentence . Suppose a sixty-word sentence is broken into six ten-word substrings . Then the parsing, instead of taking on the order of 60 3 in time, will only take on the order of 6 * 15 3 . (Whe n parsing the initial 10-word substring, we are in effect parsing at most a 15-\" word\" string covering the entir e sentence, consisting of the 10 words plus the nonterminal symbols covering the best analyses of the othe r five substrings .)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Analysi s",
"sec_num": null
},
{
"text": "When sentences do not parse, we attempt to span it with the longest, best sequence of interpretabl e fragments . The fragments we look for are main clauses, verb phrases, adverbial phrases, and noun phrases . They are chosen on the basis of length and their preference scores . We do not attempt to find fragments fo r strings of less than five words . The effect of this heuristic is that even for sentences that do not parse, w e are able to extract 88% of the propositional content .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Analysi s",
"sec_num": null
},
{
"text": "The parse tree is translated into a logical form that regularizes to some extent the role assignments i n the predicate-argument structure . For example, for a word like \"break\", if the usage contains only a subject , it is taken to be the Patient, while if it contains a subject and object, they are taken to be the Agent an d Patient respectively . Arguments inherited from control verbs are handled here as well .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Analysi s",
"sec_num": null
},
{
"text": "Our lexicon includes about 12,000 entries, including about 2000 personal names and about 2000 location , organization, or other names . This does not include morphological variants, which are dealt with in a separate morphological analyzer .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Analysi s",
"sec_num": null
},
{
"text": "In Message 99, of the 11 sentences determined to be relevant, only Sentence 14 did not parse . This was due to a mistake in the sentence itself, the use of \"least\" instead of \"at least\" . Hence, the best fragment sequence was sought . This consisted of the two fragments \"The attacks today come after Shining Path attacks\" and \"10 buses were burned throughout Lima on 24 Oct .\" The parses for both these fragments were completely correct . Thus, the only information lost was from the three words \"during which least\" .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Analysi s",
"sec_num": null
},
{
"text": "Of the 10 sentences that parsed, 5 were completely correct, including the longest, Sentence 7 (27 word s in 77 seconds) . There were three mistakes (Sentences 3, 4, and 9) in which the preferred multiword senses o f the phrases \"in front of\" and \"Shining Path\" lost out to their decompositions . There were two attachment mistakes . In Sentence 3 the relative clause was incorrectly attached to \"front\" instead of \"embassy\", and i n Sentence 8, \"in Peru\" was attached to \"attacked\" instead of \"interests\" . All of these errors were harmless . I n addition, in Sentence 5, \"and destroyed the two vehicles\" was grouped with \"Police said . . .\" instead of \"th e bomb broke windows\" ; this error is not harmless . In every case the grammar prefers the correct reading . We believe the mistakes were due to a problem in the scheduling parser that we discovered the week of th e evaluation but felt was too deep and far-reaching to attempt to fix at that point .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Analysi s",
"sec_num": null
},
{
"text": "In the first 20 messages of TST2, 131 sentences were given to the normal parser (as opposed to th e terminal substring parser) . A parse was produced for 81 of the 131, or 62% . Of these, 43 (or 33%) wer e completely correct . 30 more had three or fewer errors . Thus, 56% of the sentences were parsed correctly o r nearly correctly .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Analysi s",
"sec_num": null
},
{
"text": "These results naturally vary depending on the length of the sentences . There were 64 sentences of under 30 morphemes . Of these, 37 (58%) had completely correct parses and 48 (75%) had three or fewer errors . The normal parser attempted only 8 sentences of more than 50 morphemes, and only two of these parsed , neither of them even nearly correctly .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Analysi s",
"sec_num": null
},
{
"text": "Of the 44 sentences that would not parse, 9 were due to problems in lexical entries . 18 were due t o shortcomings in the grammar . 6 were due to garbled text . The causes of 11 failures to parse have not bee n determined . These errors are spread out evenly across sentence lengths . In addition, 7 sentences of over 3 0 morphemes hit the time limit we had set, and terminal substring parsing was invoked .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Analysi s",
"sec_num": null
},
{
"text": "The shortcomings in the grammar were the following constructions, which are not currently covered :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Analysi s",
"sec_num": null
},
{
"text": "which Adverbial V P Subordinate-Conjunction Adverbial S as V P the next few days more Noun to X than to Y NP and, Adverb, NP (this is handled without the commas ) of how S Adverb or Adver b (NP, NP ) Verb -Adverbial -N P Infinitive and Infinitiv e S (containing the word \"following\") : NPConjunctio n PP is N P be as S/NP no longer cut short N P Our results in syntactic analysis are quite encouraging since they show that a high proportion of a corpu s of long and very complex sentences can be parsed nearly correctly . However, the situation is even better when one considers the results for the best-fragment-sequence heuristic and for terminal substring parsing . A best sequence of fragments was sought for the 44 sentences that did not parse for reasons other tha n timing . A sequence was found for 41 of these . The average number of fragments in a sequence was two . Thi s means that an average of only one structural relationship was lost . Moreover, the fragments covered 88% o f the morphemes . That is, even in the case of failed parses, 88% of the propositional content of the sentence s was made available to pragmatics .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Analysi s",
"sec_num": null
},
{
"text": "For 37% of these sentences, correct syntactic analyses of the fragments were produced . For 74%, the analyses contained three or fewer errors . Correctness did not correlate with length of sentence .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Analysi s",
"sec_num": null
},
{
"text": "Terminal substring parsing was applied to 14 sentences ; ranging from 34 to 81 morphemes in length . Only one of these parsed, and that parse was not good . This is not surprising, given that this technique i s called only when all else has already failed . Sequences of fragments were found for all the other 13 sentences . The average number of fragments was 2 .6, and the sequences covered 80% of the morphemes . None of th e fragment sequences was without errors . However, eight of the 13 had three or fewer mistakes .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Analysi s",
"sec_num": null
},
{
"text": "We have found all of this extremely encouraging . Even more encouraging is the fact that a majority o f the errors in parsing can be attributed to five or six causes . Two prominent ones are the tendency of th e scheduling parser to lose favored close attachments of conjuncts and adjuncts near the end of sentences, an d the tendency to misanalyze the strin g",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Analysi s",
"sec_num": null
},
{
"text": "[[Noun Noun] N p Verbtrans NP] s as [Noun]Np [Noun Ve rb ditrans 0 NP]s/Np",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Analysi s",
"sec_num": null
},
{
"text": "We believe that many such problems could be solved with a few days work .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Analysi s",
"sec_num": null
},
{
"text": "The literals in the logical form are assigned assumability costs, based on their syntactic role, the predicate s involved, and other factors . They are then passed to the abductive theorem-prover PTTP, which attempts t o find a proof from a knowledge base of axioms about the terrorist domain . The fundamental idea behind this component is that the interpretation of a text is the best explanation for what would make it true . Generally, in this domain, the explanation is one that involves seeing the text as an instance of an \"Interesting Act \" schema, a schema which includes the principal roles in bombings, kidnappings, and so forth . The explanation of a sentence is identified with an abductive proof of its logical form . This proof may include assumptions of unprovable literals, and each assumption incurs a cost . Different proofs are compared according to the cos t of the assumptions they introduce, and the lowest cost proof is taken to be the best explanation, provided that all the assumptions are consistent .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pragmatics, or Interpretatio n",
"sec_num": null
},
{
"text": "The agents and objects of \" Interesting Acts \" are required to he \"bad guys \" and \" good guys \" respectively. \"Bad guys\" are terrorists, guerrillas, and their organizations, and good guys are civilians, judges, governmen t officials, etc . Members of the armed forces can be \"bad guys\" on certain occasions, but they are never \"goo d guys .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pragmatics, or Interpretatio n",
"sec_num": null
},
{
"text": "The knowledge base includes a taxonomy of people and objects in the domain . The primary informatio n that is derived from this taxonomy is information about the disjointness of classes of entities . For example , the classes of \"good guys\" and \"bad guys\" are disjoint, and any abductive proof that assumes \"good guy \" and \"bad guy\" of the same entity is inconsistent . To view an attack by guerrillas on regular army troop s as an interesting act would require assuming the victims, i .e . the troops, were \" good guys\" and since the \"good guys\" are inconsistent with the military, no consistent explanation of the event in question in term s of \"Interesting Act \" is possible, and hence no template would be generated for such an incident .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pragmatics, or Interpretatio n",
"sec_num": null
},
{
"text": "The abductive reasoner attempts to minimize the extensions of most predicates by factoring goals wit h previous assumptions . That means that whenever it is consistent to assume that two individuals that share a property represented by one of the predicates to be minimized are the same, it does so . This factorin g mechanism is the primary mechanism by which anaphora is resolved . Two entities with similar propertie s are generally assumed to be identical . Pronominal anaphora works differently, in that the structure of th e text is taken into account in creating an ordered list of possible antecedents . The abductive reasoner wil l resolve the pronoun with the first object on the antecedent list that leads to a consistent proof .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pragmatics, or Interpretatio n",
"sec_num": null
},
{
"text": "Using the factoring mechanism for anaphora resolution requires one to have a rich enough domain theor y so that incorrect resolutions can be eliminated from consideration . Otherwise, the system is strongly biase d toward collapsing everything into a single individual or event . On the other hand, consistency checking can b e computationally hard, and whatever theory is adopted for consistency checking must be fast . Our experienc e has been that the taxonomic consistency check described above is mostly adequate for rejecting incorrec t resolutions, but we have found it necessary to augment the taxonomic check with some other strategie s for determining inconsistency . For example, we reject as inconsistent any proof that assumes that a singl e individual has two distinct proper surnames .' We also assume it is inconsistent to resolve an individual wit h a set, and to resolve two sets that are known to be of different cardinality .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pragmatics, or Interpretatio n",
"sec_num": null
},
{
"text": "The domain knowledge base is divided into a set of axioms, which are used for abductively proving th e 'sentences from the text, and a class hierarchy, which is used for checking the consistency of the proofs . Th e axioms are divided into a core set of axioms describing the events in the domain that correspond to th e incident types, and lexical axioms, which are meaning postulates that relate the predicate introduced by a lexical item to the core concepts of the domain .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pragmatics, or Interpretatio n",
"sec_num": null
},
{
"text": "The knowledge base includes approximately 550 axioms at the current stage of development . This breaks down into about 60 axioms expressing the core facts about the schemas of interest, 430 axioms relating lexica l entries to these core schemas, and approximately 60 axioms for resolving compound nominals, of-relations , and possessives . The knowledge base also includes approximately 1100 locations, for which relevant axioms are introduced automatically at run-time .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pragmatics, or Interpretatio n",
"sec_num": null
},
{
"text": "The task of the template generation component is to take the results of the abductive proofs in pragmatics , and put them into a template form according to the specifications of the task . This generates one templat e for every interesting act that is assumed by pragmatics, with several exceptions . An interesting act can b e both an ATTACK and a MURDER, and only the MURDER template would be produced . An interestin g act of type MURDER might be divided into two templates, if it was found that some of the victims survive d the attack . For example \"Terrorists shot John and Mary . John was wounded and Mary was found dead a t the scene,\" would generate one MURDER template and one ATTEMPTED MURDER template .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Generatio n",
"sec_num": null
},
{
"text": "For each interesting act, a cluster of contemporaneous and causally related events from the text i s formulated . Any temporal or locative information that is associated with any of these events, or the agent s and objects participating in the events, is used to fill the DATE and LOCATION slots of the respectiv e templates . Each slot is then filled by looking at the arguments of the relevant predicates, and if any of these arguments represent sets, the sets are expanded into their constituents for the slot fills .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Generatio n",
"sec_num": null
},
{
"text": "For string fills, proper names are preferred, if any are known, and if not, the longest description fro m all the coreferential variables denoting that entity is used, excluding certain uninformative descriptors lik e \"casualties . \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Generatio n",
"sec_num": null
},
{
"text": "In a final pass, analysis eliminates from consideration templates that do not pass certain coherence o r relevance filters . For example, any template that has a \"bad guy\" as the object of an attack is rejected , since this is probably a result of an error in solving some pragmatics problem . Templates for events tha t take place in the distant past are rejected, as well as events that take place repeatedly or over vague tim e spans (e .g . \"in the last three weeks\") . Finally, templates for events that take place in irrelevant countrie s are eliminated . This final filter, unfortunately, can eliminate entirely otherwise correct templates for whic h the location of the incident is incorrectly identified . This was responsible for several costly mistakes in th e evaluation .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Generatio n",
"sec_num": null
},
{
"text": "'This, of course, does not account for the situation in which a criminal has an alias, but in practice this occurs seldo m enough, and the effect of this mistake on the ability to produce correct template fills seems small enough that it is clearly a benefit to do so .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "It is difficult to evaluate the interpretation and template generation components individually . However , we have examined the first twenty messages of TST2 in detail and attempted to pinpoint the reason for eac h missing or incorrect entry in a template .There were 269 such mistakes, due to problems in 41 sentences . We have classified them into a numbe r of categories, and the results for the principal causes are as follows :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CAUSES OF FAILURE S",
"sec_num": null
},
{
"text": "Mistakes An example of a missing simple axiom is that \"bishop\" is a profession . An example of a missing comple x axiom or theory is whatever it is that one must know to infer the perpetrator from the fact that a flag of a terrorist organization was left at the site of a bombing . An underconstrained axiom is one that allows, fo r example, \"damage to the economy\" to be taken as a terrorist incident . Unconstrained factoring is described above . An example of a lexicon error would be a possibly intransitive verb that was not correctly specified as intransitive . The syntax-pragmatics mismatches in logical form were representation decisions (generall y recent) that did not get reflected in either the syntax or pragmatics components . \"Combinatorics\" simpl y means that the theorem-prover timed out ; that this number was so low was a pleasant surprise for us .Note in these results that two incorrect lexical entries and problems in handling three unknown words wer e responsible for 23% of the mistakes . This illustrates the discontinuous nature of the mapping from processin g to evaluation . A difference of 6 in how a text is processed can result in a difference of considerably mor e than e in score . The lesson is that the scores cannot be used by themselves to evaluate a system . One must analyze its performance at a deeper, more detailed level, as we have tried to do here .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reason",
"sec_num": null
},
{
"text": "This research has been funded by the Defense Advanced Research Projects Agency under Office of Nava l Research contracts N00014-85-C-0013 and N00014-90-C-0220 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ACKNOWLEDGEMENT S",
"sec_num": null
},
{
"text": "[1] Hobbs, Jerry R ., Stickel, Mark, Appelt, Douglas, and Martin, Paul, \"Interpretation as Abduction\", SR IInternational Artificial Intelligence Center Technical Note 499, December 1990 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "REFERENCE S",
"sec_num": null
}
],
"bib_entries": {},
"ref_entries": {}
}
}