{ "paper_id": "M91-1010", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:15:28.027368Z" }, "title": "ITP INTERPRETEXT SYSTEM: MUC-3 TEST RESULTS AND ANALYSI S", "authors": [ { "first": "Kathleen", "middle": [], "last": "Dahlgre", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carol Lord Hajime Wada Joyce McDowel l Edward P . Stabler", "location": { "region": "Jr" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "M91-1010", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "Intelligent Text Processing is a small start-up company participating in the MUC-3 exercise fo r the first time this year . Our system, Interpretext, is based on a prototype text understandin g system. With three full-time and three part-time people, dividing time between MUC-3 and othe r contract projects, ITP made maximum use of modest resources . Figure 1 . The ITP system was second highest in precision (46%) when all templates were considered, and at the same time achieved a credible recall percentage (20%) . Our overgeneration rate was second best (34%) . ITP was a very close second in both precisio n and overgeneration, as the top percentages were 48 and 33 to ITP's 46 and 34 . The major limitin g factor in ITP's MUC-3 performance was parser failure. We are building a parser with wide coverage and a comprehensive approach to disambiguation . Because our parser is not yet complete, in order to participate in the MUC-3 exercise we used a parser on loan .", "cite_spans": [], "ref_spans": [ { "start": 352, "end": 360, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "", "sec_num": null }, { "text": "It proved to lack the robustness necessary to parse the MUC-3 messages, failing on 50% o f the sentences . For those sentences which it did parse, the Interpretext system returned precise semantic interpretations . ITP's word-based approach required minimal reorientation in shifting to the new domain of terrorism texts; the main new material was the straightforward addition of a relatively small number of new words to the syntactic and naive semantic lexicons, not whole ne w semantic modules . The semantic structures and analyses already implemented proved to be appropriate for texts in the new domain .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The source of the precision in our performance was the Cognitive Model built by the Natura l Language Understanding Module . The Cognitive Model contains specific reference marker s identifying events and individuals in the text . The same events and individuals are given the sam e reference markers by the Anaphora Resolution Module . The Cognitive Model distinguishe s between events, individuals and sets . It directly displays the argument structure of events . Thus , to find a terrorist incident, the template-filling code looked for an event which implied harm , damage or some other consequence of terrorism in the Naive Semantics for the verb naming th e event. The agent of the event had to be described as having a role in clandestine activity, th e government or the military . The ITP naive semantic lexicon distinguishes between nouns which names objects and nouns which name events, so that the template-filling code had only to look fo r events, even those introduced by phrases such as the destruction of homes in .. . 
Furthermore, the Cognitive Model connects head nouns with prepositional phrase modifiers and adjectival or nominal modifiers via the same reference marker. Thus the template-filling code could look for a variety of modifiers of an individual as a source of information about the individual. For example, the phrase member of the guerrilla troop connects member with troop and guerrilla, so that the template-filling code could recognize a semantically empty term like member as referring to an agent. This type of connection works everywhere, not just with the particular string pattern member of the guerrilla troop. Furthermore, it is much more precise than a pattern-matching method which would find guerrilla as perpetrator everywhere it occurs, even when a phrase like \"member of the guerrilla troop\" is the object of a verb which implies harm, and is therefore not indicative of guerrilla terrorism.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Another source of precision is that the formal semantic module interprets the cardinality of sets. \"None\", \"plural\" or \"three\" come out in the formal representation as the number of objects in a set. Finding target number and amount of injury and damage is trivial given a precise treatment of cardinality in the formal semantics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Finally, the Cognitive Model indicated discourse segments. These are portions of the text which function as a unit around one topic. The recognition of segments simplified the anaphora resolution and the process of identifying the same individuals and events with each other. It prevented the overgeneration of templates. Some competitor systems generated a new template for each sentence containing a terrorism word and then they had to try to merge them. Without segment information, merging was very difficult.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "A Cognitive Model with this level of precision can be built only when a deep natural language analysis of the text is performed. Syntactic, formal semantic, discourse semantic and pragmatic (or naive semantic) complexities of text are addressed by the ITP Natural Language Understanding Module. Some researchers have rejected a principled linguistic approach as hopeless at this stage in the history of computational linguistic research. They assume that the only feasible methods are statistical. Such systems match certain string patterns and rely upon the statistical probability that they co-occur with a particular semantic interpretation. The problem is that many times the pattern occurs in a phrase which is irrelevant, or has the opposite meaning to the predicted one. The pattern can occur in the scope of a negative or modal, as in the bomb did not explode, and produce a false alarm for a pattern-matching method. Such methods will tend to over-generate templates, because patterns indicate a terrorist incident where there is none. For the same false-alarm texts, more precise linguistic analysis can correctly rule out a terrorist incident.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Furthermore, the patterns for matching must be coded anew for each domain. In contrast, ITP Naive Semantic and syntactic lexicons need only be built once, and they work across all domains.
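The modifier connection illustrated above by member of the guerrilla troop reduces to collecting every descriptor that shares an individual's reference marker. The sketch below uses invented structures and field names, not the system's own representation.

```python
# Illustrative sketch: a toy Cognitive Model fragment for the phrase
# "member of the guerrilla troop"; the field names are invented here.
# Heads and modifiers of the same individual share one reference marker.

descriptions = [
    {"marker": "x1", "head": "member"},
    {"marker": "x1", "head": "troop", "modifiers": ["guerrilla"]},
]

def descriptors_of(marker, descriptions):
    """Collect every head noun and modifier attached to a reference marker."""
    terms = set()
    for entry in descriptions:
        if entry["marker"] == marker:
            terms.add(entry["head"])
            terms.update(entry.get("modifiers", []))
    return terms

# The semantically empty head "member" is still recognized as an agent,
# because "guerrilla" and "troop" hang off the same reference marker.
print(descriptors_of("x1", descriptions))  # {'member', 'troop', 'guerrilla'} (order may vary)
```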
For MUC-3 we added to an existing naive semantic lexicon prepared originally for texts in other domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In summary, ITP was precise in the MUC-3 fills for the sentences which our loaner parser was able to process. When our own parser is available, ITP's technology will vastly improve in recall.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The basic approach to template-filling involved looking at feature types in the naive semantic knowledge for verbs and nouns. The feature types inspected had already been present in the theory and in the system prior to MUC-3. The verb feature \"consequence of event\" was important for recognizing terrorist incidents, because if the typical consequence of an event was damage or harm, it triggered a template fill. The theory of Naive Semantics as described in Dahlgren [1] identifies that feature type as important in lexical semantics and reasoning about discourse. Similarly, the \"rolein\" feature was used to distinguish between clandestine agents, government agents and military agents. Again, that feature type was antecedently present in our theory.", "cite_spans": [ { "start": 470, "end": 473, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Naive Semantics", "sec_num": null }, { "text": "The effect of the MUC-3 reader was to exclude any sentences which did not contain a terror word, saving processing time. This setting tended to reduce precision, because a sentence like She succeeded contains no terrorism word, but could be very significant in the recognition of a terrorist incident. Recall was implicitly set very low by the fact that the parser was able to parse only 50% of the input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Test Settings", "sec_num": null }, { "text": "The greatest effort by ITP was the six years of research that went into the Natural Language Understanding Module. As for MUC-3-specific tasks, Table I indicates the level of effort on each one. ITP made a detailed linguistic analysis of the terrorism domain, and the way that terrorist incidents were described in the first messages sent out by NOSC, and in the DEV messages. The analysis guided the expansion of the lexicons and the writing of the template-filling code. During Test 1 we identified both parser failure and parse time to be problems in our performance. Therefore, for Test 2 we built a reader which could handle dates, abbreviations, and so on, and would return a sentence only if it contained a terrorism word. In addition, we pruned the output to shorten sentences for the parser. These tactics will not be necessary once our own wide-coverage parser is completed. The template-filling code took about as much of our time as the reader and pruner. Each element of the code reasons from the Cognitive Model using generalized lexical reasoning or DRS reasoning. The temporal-locative reasoning is general and will be used in other applications.", "cite_spans": [], "ref_spans": [ { "start": 144, "end": 151, "text": "Table I", "ref_id": null } ], "eq_spans": [], "section": "Level of Effort", "sec_num": null }, { "text": "The main limiting factors were the parser and resources. With more persons and time, we could have written code for all of the fills and debugged the template-filling code thoroughly.
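At its core, the reader described under Test Settings and Level of Effort above is a keyword filter over sentences. The sketch below is a minimal illustration; the word list is invented and stands in for ITP's actual terror-word list.

```python
# Minimal sketch of the MUC-3 reader's filtering step; the keyword list here
# is invented and stands in for ITP's actual terror-word list.

TERROR_WORDS = {"bomb", "attack", "kidnap", "assassinate", "guerrilla"}

def keep_sentence(sentence):
    """Pass a sentence to the parser only if it mentions a terror word."""
    tokens = [t.strip(".,;:!?\"'").lower() for t in sentence.split()]
    return any(t.startswith(word) for t in tokens for word in TERROR_WORDS)

messages = [
    "Guerrillas attacked the embassy yesterday.",
    "She succeeded.",  # dropped, even though it may matter for the incident
]
print([s for s in messages if keep_sentence(s)])  # keeps only the first
```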
Given the modest resources we had, we were forced to run the test before we had thoroughly debugged the code. In particular, our code for recognizing and building up proper names was in place, but failed during the test in most cases. That explained our performance on Perpetrator Organization. Given that we missed the latter, we of course could not get Category of Incident correct for any of the State-sponsored Violence cases either.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limiting Factor", "sec_num": null }, { "text": "Training took place on the first 100 DEV messages, and on Test 1 messages with the new key. We did not have sufficient resources to fully debug and repeatedly test prior to MUC-3 week. The system improved dramatically between Test 1 and Test 2 (from recall of 3 to recall of 20). Improvement was mainly due to expansion of the template-filling code and the introduction of pruning to get more parses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": null }, { "text": "For those sentences which we were able to parse, the reasoning performed well for incident recognition, segmentation (separating different incidents in the same message), and perpetrator and target recognition. The only exceptions were perpetrators or targets with long proper names. We have an approach to these, but didn't get it working in time. The fills which failed were perpetrator organization (because of names) and target nationality. The latter code is working fine (it looks to see whether any descriptor of an individual is a foreign nation name or adjective). The failures were due to missing the whole template because of parsing, or missing the target in a recognized template. In addition, our target number code was not fully operational at the time of the test. We would most like to rewrite the template-filling code in even more general reasoning algorithms which could be used in applications beyond the terrorism domain. Our system's capabilities make possible a question-answering system which could reply to English queries like Who did it? and How many people were killed?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Success and Failure", "sec_num": null }, { "text": "Everything but the template-filling code is reusable in a different application. All of the words we added to the lexicons have all of their senses common in American English. They can be used in any domain. As for the template-filling code, we plan to extract generalizable reasoning algorithms for use in other domains. Again, the code is reusable because it is a principled, general linguistic approach rather than a pattern-matching approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reusability", "sec_num": null }, { "text": "We learned that anything a person wants to say or write can be said in an extremely large number of different ways. Therefore, a robust deep natural language understanding system must have a wide-coverage parser and formal semantics which directly display the similarity of content across many possible forms of expression. A sound theoretical approach such as DRT is particularly appropriate for a data extraction task. Secondly, we learned that natural language systems require ample testing against real-world texts.
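The target-nationality test mentioned under Success and Failure above is essentially a membership check over an individual's descriptors. A hedged sketch follows, with an invented nation table; it is not the system's code.

```python
# Illustrative sketch of the target-nationality check: any descriptor of the
# target that is a foreign nation name or nationality adjective fills the
# slot. The nation table below is invented for this example.

FOREIGN_NATIONS = {
    "peru": "PERU", "peruvian": "PERU",
    "colombia": "COLOMBIA", "colombian": "COLOMBIA",
}

def target_nationality(descriptors):
    """Return the nationality implied by an individual's descriptors, if any."""
    for word in descriptors:
        nation = FOREIGN_NATIONS.get(word.lower())
        if nation:
            return nation
    return None

# Descriptors gathered from the Cognitive Model for one target individual:
print(target_nationality(["Peruvian", "ambassador"]))  # PERU
```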
And, third, a system in which word meanings are central, developed to interpret text in the domains of geography and finance, can function in the domain of terrorism with the addition of a relatively small number of lexical items.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "What we learned", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Naive Semantics for Natural Language Understanding", "authors": [ { "first": "K", "middle": [], "last": "Dahlgren", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dahlgren, K. (1988). Naive Semantics for Natural Language Understanding. Kluwer Academic Publishers, Norwell, Mass.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Coherence Relation Assignment", "authors": [ { "first": "K", "middle": [], "last": "Dahlgren", "suffix": "" } ], "year": 1989, "venue": "Proceedings of the Cognitive Science Society", "volume": "", "issue": "", "pages": "588--596", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dahlgren, K. (1989). \"Coherence Relation Assignment,\" in Proceedings of the Cognitive Science Society, pp. 588-596.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A Theory of Truth and Semantic Representation", "authors": [ { "first": "H", "middle": [], "last": "Kamp", "suffix": "" } ], "year": 1981, "venue": "Formal Methods in the Study of Language, Mathematisch Centrum", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kamp, H. (1981). \"A Theory of Truth and Semantic Representation,\" in Groenendijk, J.; T. Janssen; and M. Stokhof, editors, Formal Methods in the Study of Language, Mathematisch Centrum, Amsterdam.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "Intelligent Text Processing Final Scores, Test 2" }, "TABREF2": { "num": null, "content": "
Tasks | Estimated Person-weeks
Linguistic analysis of terrorism domain | 4
Syntactic Lexicon expansion | 2
Naive Semantic Lexicon expansion | 3
Reader, pruner | 4
Temporal, locative reasoning | 2
Template-filling code | 4
", "type_str": "table", "html": null, "text": "MUC-3 specific Tasks and their Estimated Person-Weeks" } } } }