{ "paper_id": "M91-1022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:15:31.411992Z" }, "title": "GE: DESCRIPTION OF THE NLTOOLSET SYSTEM AS USED FO R MUC-3", "authors": [ { "first": "George", "middle": [], "last": "Krupka", "suffix": "", "affiliation": { "laboratory": "Artificial Intelligence Laboratory GE Research and Developmen t Schenectady", "institution": "", "location": { "postCode": "12301", "region": "NY", "country": "US A" } }, "email": "krupka@crd.ge.com" }, { "first": "Paul", "middle": [], "last": "Jacobs", "suffix": "", "affiliation": { "laboratory": "Artificial Intelligence Laboratory GE Research and Developmen t Schenectady", "institution": "", "location": { "postCode": "12301", "region": "NY", "country": "US A" } }, "email": "" }, { "first": "Lisa", "middle": [], "last": "Rau", "suffix": "", "affiliation": { "laboratory": "Artificial Intelligence Laboratory GE Research and Developmen t Schenectady", "institution": "", "location": { "postCode": "12301", "region": "NY", "country": "US A" } }, "email": "" }, { "first": "Lucja", "middle": [], "last": "Iwaiisk", "suffix": "", "affiliation": { "laboratory": "Artificial Intelligence Laboratory GE Research and Developmen t Schenectady", "institution": "", "location": { "postCode": "12301", "region": "NY", "country": "US A" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The GE NLTooLSET is a set of text interpretation tools designed to be easily adapted to new domains. This report summarizes the system and its performance on the MUG-3 task. INTRODUCTIO N The GE NLTooLsET aims at extracting and deriving useful information from text using a knowledge-based , domain-independent core of text processing tools, and customizing the existing programs to each new task. The program achieves this transportability by using a core knowledge base and lexicon that adapts easil y to new applications, along with a flexible text processing strategy that is tolerant of gaps in the program' s knowledge base. The NLTooLSET's design provides each system component with access to a rich hand-coded knowledg e base, but each component applies the knowledge selectively, avoiding the computation that a complet e analysis of each text would require. The architecture of the system allows for levels of language analysis , from rough skimming to in-depth conceptual interpretation. The NLTooLSET, in its first version, was behind GE 's participation in the MUCK-II conference. Sinc e MUCK-II, the Toolset, now in Release 2 .1, has expanded to include a number of new capabilities, includin g a text pre-processor for easier customization and better performance, broader lexical and syntactic coverage , and a domain-independent module for applying word-sense preferences in text. In addition to being teste d in several new application areas, the Toolset has achieved about a 10 times speedup in words per minute s over MUCK-II, and can now partially interpret and tag word senses in arbitrary news stories, although it i s very difficult to evaluate this task-independent performance. These basic enhancements preceded the other additions, including a discourse processing module ;, which were made for MUC-3. The performance of the program on tasks such as MUCK-II and MUC-3 derives mainly from two design characteristics : central knowledge hierarchies and flexible control strategies. A custom-built 10,000 word-root lexicon and 1000-concept hierarchy provides a rich source of lexical information. 
Entries are separated by their senses, and contain special context clues to help in the sense-disambiguation process. A morphological analyzer contains semantics for about 75 affixes, and can automatically derive the meanings of inflected entries not separately represented in the lexicon. Domain-specific words and phrases are added to the lexicon by connecting them to higher-level concepts and categories present in the system's core lexicon and concept hierarchy. Lexical analysis can also be restricted or biased according to the features of a domain. This is one aspect of the NLToolset that makes it highly portable from one domain to another. The language analysis strategy in the NLToolset uses fairly detailed, chart-style syntactic parsing guided by conceptual expectations. Domain-driven conceptual structures provide feedback in parsing, contribute to scoring alternative interpretations, help recovery from failed parses, and tie together information across sentence boundaries. The interaction between linguistic and conceptual knowledge sources at the level of linguistic relations, called \"relation-driven control\", was a key system enhancement before MUC-3. In addition to flexible control, the design of the NLToolset allows each knowledge source to influence different stages of processing. For example, discourse processing starts before parsing, although many decisions about template merging and splitting are made after parsing. This allows context to guide language analysis, while language analysis still determines context.", "pdf_parse": { "paper_id": "M91-1022", "_pdf_hash": "", "abstract": [ { "text": "The GE NLToolset is a set of text interpretation tools designed to be easily adapted to new domains. This report summarizes the system and its performance on the MUC-3 task. INTRODUCTION The GE NLToolset aims at extracting and deriving useful information from text using a knowledge-based, domain-independent core of text processing tools, and customizing the existing programs to each new task. The program achieves this transportability by using a core knowledge base and lexicon that adapts easily to new applications, along with a flexible text processing strategy that is tolerant of gaps in the program's knowledge base. The NLToolset's design provides each system component with access to a rich hand-coded knowledge base, but each component applies the knowledge selectively, avoiding the computation that a complete analysis of each text would require. The architecture of the system allows for levels of language analysis, from rough skimming to in-depth conceptual interpretation. The NLToolset, in its first version, was behind GE's participation in the MUCK-II conference. Since MUCK-II, the Toolset, now in Release 2.1, has expanded to include a number of new capabilities, including a text pre-processor for easier customization and better performance, broader lexical and syntactic coverage, and a domain-independent module for applying word-sense preferences in text. In addition to being tested in several new application areas, the Toolset has achieved about a tenfold speedup in words per minute over MUCK-II, and can now partially interpret and tag word senses in arbitrary news stories, although it is very difficult to evaluate this task-independent performance. These basic enhancements preceded the other additions, including a discourse processing module, which were made for MUC-3.
The performance of the program on tasks such as MUCK-II and MUC-3 derives mainly from two design characteristics: central knowledge hierarchies and flexible control strategies. A custom-built 10,000 word-root lexicon and 1000-concept hierarchy provides a rich source of lexical information. Entries are separated by their senses, and contain special context clues to help in the sense-disambiguation process. A morphological analyzer contains semantics for about 75 affixes, and can automatically derive the meanings of inflected entries not separately represented in the lexicon. Domain-specific words and phrases are added to the lexicon by connecting them to higher-level concepts and categories present in the system's core lexicon and concept hierarchy. Lexical analysis can also be restricted or biased according to the features of a domain. This is one aspect of the NLToolset that makes it highly portable from one domain to another. The language analysis strategy in the NLToolset uses fairly detailed, chart-style syntactic parsing guided by conceptual expectations. Domain-driven conceptual structures provide feedback in parsing, contribute to scoring alternative interpretations, help recovery from failed parses, and tie together information across sentence boundaries. The interaction between linguistic and conceptual knowledge sources at the level of linguistic relations, called \"relation-driven control\", was a key system enhancement before MUC-3. In addition to flexible control, the design of the NLToolset allows each knowledge source to influence different stages of processing. For example, discourse processing starts before parsing, although many decisions about template merging and splitting are made after parsing. This allows context to guide language analysis, while language analysis still determines context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The next section briefly describes the major portions of the NLToolset and its control flow; the remainder of the paper will discuss the application of the Toolset to the MUC-3 task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Processing in the NLToolset divides roughly into three stages: (1) pre-processing, consisting mainly of a pattern matcher and discourse processing module, (2) linguistic analysis, including parsing and semantic interpretation, and (3) post-processing, or template filling. Each stage of analysis applies a combination of linguistic, conceptual, and domain knowledge, as shown in Figure 1.", "cite_spans": [], "ref_spans": [ { "start": 383, "end": 391, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "SYSTEM OVERVIEW", "sec_num": null }, { "text": "The pre-processor uses lexico-semantic patterns to perform some initial segmentation of the text, identifying phrases that are template activators, filtering out irrelevant text, combining and collapsing some linguistic constructs, and marking portions of text that could describe discrete events. This component is described in [1]. Linguistic analysis combines parsing and word sense-based semantic interpretation with domain-driven conceptual processing. The programs for linguistic analysis are largely those explained in [2, 3]; the changes made for MUC-3 involved mainly some additional mechanisms for recovering from failed processing and heavy pruning of spurious parses.
Post-processing includes the final selection of templates and mapping semantic categories and roles onto those templates. This component used the basic elements from MUCK-II, adding a number of specialized rules for handling guerrilla warfare types, and refining the discourse structures to perform the template splitting and merging required for MUC-3.", "cite_spans": [ { "start": 339, "end": 342, "text": "[1]", "ref_id": "BIBREF0" }, { "start": 539, "end": 542, "text": "[2,", "ref_id": "BIBREF1" }, { "start": 543, "end": 545, "text": "3]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Syntax", "sec_num": null }, { "text": "The control flow of the system is primarily from linguistic analysis to conceptual interpretation to domain interpretation, but there is substantial feedback from conceptual and domain interpretation to linguistic analysis. The MUC-3 version of the Toolset includes our first implementation of a strategy called relation-driven control, which helps to mediate between the various knowledge sources involved in interpretation. Basically, relation-driven control gives each linguistic relation in the text (such as subject-verb, verb-complement, or verb-adjunct) a preference score based on its interpretation in context. Because these relations can apply to a great many different surface structures, relation-driven control provides a means of combining preferences without the tremendous combinatorics of scoring many complete parses. Effectively, relation-driven control permits a \"beam\" strategy for considering multiple interpretations without producing hundreds or thousands of new paths through the linguistic chart.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax", "sec_num": null }, { "text": "The knowledge base of the system, consisting of a feature-and-function (unification-style) grammar with associated linguistic relations, and the lexicon mentioned earlier, still proves transportable and largely generic. The core lexicon contains over 10,000 entries, of which 37 had to be restricted because of specialized usage in the MUC-3 domain (such as device, which always means a bomb, and plant, which as a verb usually means to place a bomb and as a noun usually means the target of an attack). The core grammar contains about 170 rules, with 50 relations and 80 additional subcategories. There were 23 MUC-specific additions to this grammatical knowledge base, including 8 grammar rules, most of them dealing with unusual noun phrases that describe organizations in the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax", "sec_num": null }, { "text": "The control, pre-processing, and transportable knowledge base were all extremely successful for MUC-3; remarkably, lexical and grammatical coverage, along with the associated problems in controlling search and selecting among interpretations, proved not to be the major stumbling blocks for our system. Further distinguishing events and merging or splitting templates proved to be the major obstacle in obtaining a better score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntax", "sec_num": null }, { "text": "The common \"walkthrough\" example, TST1-0099, is a good example of many of the problems in analysis and template filling, although it is somewhat unrepresentative of the difficulties in parsing because the key content is contained in fairly simple sentences.
We will explain briefly what our program did, then provide details of the story-level and sentence-level interpretation with an analysis of the templates produced.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ANALYSIS OF TST1-0099", "sec_num": null }, { "text": "In many ways, TST1-0099 is representative of how the Toolset performed on MUC-3. The program parsed most of the key sentences, failed to parse some of the less relevant sentences, missed a key relationship between locations (thus failing to split a template into two separate events), and incorrectly included an earlier bombing as part of a main event in the story. One additional program fill was scored incorrect because the answer key had the wrong date. The program thus derived 36 slots out of a possible 43, with 21 correct, 2 partial, 2 incorrect, and 11 spurious, for 51% recall, 61% precision, and 30% overgeneration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of Example", "sec_num": null }, { "text": "As explained in the previous section, the Toolset uses pattern matching for pre-processing, followed by discourse processing, parsing and semantic interpretation, and finally template filling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detail of Message Run", "sec_num": null }, { "text": "Pre-processing uses pattern matching to manipulate the input text. It recognizes relevant fillers, tags, collapses constructs, and segments the text into fragments describing different events. For example, the pattern matcher recognizes the phrase describing the bombing event in the first sentence of the text, collapses the conjunctive phrase the embassies of the PRC and the Soviet Union, and marks that as a complementizer (rather than a relative pronoun, pronoun, or determiner). In later sentences, it also marks locative phrases like in the Lima residential district of San Isidro and located in Orrantia district. The discourse processing module does an initial text segmentation based on (1) definite and indefinite references like a car bomb and the attack, (2) the relationship between events (e.g., bombing and arson), and (3) cue phrases.
This identifies six events:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detail of Message Run", "sec_num": null }, { "text": "(1) the phrase bombed describes a bombing event; the phrase the bombs marks its continuation;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detail of Message Run", "sec_num": null }, { "text": "(2) the phrase a car bomb exploded signifies a new bombing event;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detail of Message Run", "sec_num": null }, { "text": "(3) the temporal cue phrase meanwhile combined with the phrase two bombs in indefinite form signifies another bombing event; the phrases the bombs and the attacks mark the continuation of this event;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detail of Message Run", "sec_num": null }, { "text": "(4) the phrase a Shining Path bombing indicates yet another bombing event; the sentence gets deleted because the temporal information in the phrase some three years ago violates MUC-3 constraints;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detail of Message Run", "sec_num": null }, { "text": "(5) the cue phrase in another incident combined with the phrases killed and dynamite delineates another bombing event; the sentence gets deleted because the temporal information from the phrase three years ago violates MUC-3 constraints;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detail of Message Run", "sec_num": null }, { "text": "(6) the phrase burned indicates a new arson event.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detail of Message Run", "sec_num": null }, { "text": "Linguistic analysis parses each sentence and produces (possibly alternative) semantic interpretations at the sentence level. These interpretations select word senses and roles, heavily favoring domain-specific senses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detail of Message Run", "sec_num": null }, { "text": "Post-processing maps the semantic interpretations onto templates, eliminating invalid fills (in this case none), combining certain multiple references (such as to the embassies), and adding certain information (like adding numbers and \"guessing\" terrorist groups as fillers when it fails to find evidence to the contrary). Post-processing collapses three of the segments (events) produced during pre-processing into one template. 6. ORG-PERPS \"MAOIST \\"SHINING PATH\\" GROUP\" \"GUEVARIST \\"TUPAC AMARU REVOLUTIONARY MOVEMENT\\"\" 7. PERP-CONF POSSIBLE: \"MAOIST \\"SHINING PATH\\" GROUP\" POSSIBLE: \"GUEVARIST \\"TUPAC AMARU REVOLUTIONARY MOVEMENT\\"\" 8. PHYS-TGT-ID \"VEHICLES\" \"EMBASSIES OF THE PRC\" \"EMBASSIES OF THE PRC AND THE SOVIET UNION\" 9. PHYS-TGT-NUM 4 10. PH-TGT-TYPE TRANSPORT VEHICLE: \"VEHICLES\" DIPLOMAT OFFICE OR RESIDENCE: \"EMBASSIES OF THE PRC\" DIPLOMAT OFFICE OR RESIDENCE: \"EMBASSIES OF THE PRC AND THE SOVIET UNION\" 11. HUM-TGT-ID \"SOVIET MARINES\" 12. HUM-TGT-NUM 15 13. HM-TGT-TYPE ACTIVE MILITARY: \"SOVIET MARINES\" 14. TARGET-NAT PEOPLES REP OF CHINA: \"EMBASSIES OF THE PRC\" USSR: \"EMBASSIES OF THE PRC AND THE SOVIET UNION\" USSR: 17. PHYS-EFFECT SOME DAMAGE: \"BUSES\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detail of Message Run", "sec_num": null }, { "text": "18.
HUM-EFFECT", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detail of Message Run", "sec_num": null }, { "text": "The failure to recognize that San Isidro and Orrantia are distinct locations caused the program to combine the two bombings into one template (under the mistaken assumption that Orrantia is in San Isidro). We still do not know why the program did not fill in the city name \"Lima\", although this would not have affected the score. As a result of the location assumption, the Toolset got two extra fills for slots 8 and 10 in the first template (effectively by merging the templates), missed slot 9 entirely (because the number of targets is different), and got one extra fill in slot 14, in addition to partial credit for the location.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detail of Message Run", "sec_num": null }, { "text": "The program correctly discarded the bombing of the bus in 1989 but failed to group the 15 wounded Soviet marines correctly with that event (because of a simple bug which caused the deletion of the earlier event before the wounding effect was processed), thus losing points also in Template 1, slots 11, 12, 13, 14, and 18 (even though slot 18 was correct except for the cross-reference).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detail of Message Run", "sec_num": null }, { "text": "The program got the correct date from tonight, October 25. The answer key has a range, October 24-25. The second template produced by the Toolset was completely correct, but the score for this message is 51% recall and 61% precision, mainly due to the combining of two possible templates into one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detail of Message Run", "sec_num": null }, { "text": "MUC-3 is a very difficult task, involving a combination of language interpretation and conceptual and domain knowledge, along with many rules and strategies for template filling. The examples given here show not only how our system performs this, but hopefully some of the limitations of the system and the penalties paid in the scoring for these mistakes. While it is very difficult to attribute effects in the score to particular functions of the programs, there is no question that the task adequately exercises most of the current features of our system. It is equally clear that there is ample room for improvements from promising research areas, such as implicit event reference, discourse processing and representation, and general reference, as well as from task-specific processing and more well-known problems such as general inference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SUMMARY AND CONCLUSION", "sec_num": null } ], "back_matter": [ { "text": "The following is the trace of the first two sentences of TST1-0099: the \"call\" to Trumpet represents the end of the first stage of semantic interpretation and the beginning of conceptual analysis, and the output that follows represents the mapping (or role extension) from semantic roles to templates.
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence-level Interpretatio n", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Lexico-semantic pattern matching as a companio n to parsing in text understanding", "authors": [ { "first": "Paul", "middle": [ "S" ], "last": "Jacobs", "suffix": "" }, { "first": "George", "middle": [ "R" ], "last": "Krupka", "suffix": "" }, { "first": "Lisa", "middle": [ "F" ], "last": "Rau", "suffix": "" } ], "year": 1991, "venue": "Fourth DARPA Speech and Natural Language Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul S . Jacobs, George R . Krupka, and Lisa F . Rau . Lexico-semantic pattern matching as a companio n to parsing in text understanding . In Fourth DARPA Speech and Natural Language Workshop, San Mateo , CA, February 1991 . Morgan-Kaufmann .", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "SCISOR : Extracting information from on-line news", "authors": [ { "first": "Paul", "middle": [], "last": "Jacobs", "suffix": "" }, { "first": "Lisa", "middle": [], "last": "Rau", "suffix": "" } ], "year": 1990, "venue": "Communications of th e Association for Computing Machinery", "volume": "33", "issue": "11", "pages": "88--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Jacobs and Lisa Rau . SCISOR : Extracting information from on-line news . Communications of th e Association for Computing Machinery, 33(11) :88-97, November 1990 .", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "TRUMP : A transportable language understanding program", "authors": [ { "first": "P", "middle": [], "last": "Jacobs", "suffix": "" } ], "year": 1991, "venue": "International Journal of Intelligent Systems", "volume": "6", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Jacobs . TRUMP : A transportable language understanding program . International Journal of Intelli- gent Systems, 6(4), 1991 .", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "text": "Stages of data extraction", "type_str": "figure" }, "FIGREF1": { "uris": null, "num": null, "text": "Trumpet with Sexp : (C-CAUSING (R-CAUSE (C-BOMB (R-DEFINITE (DET_THE1))) ) (R-EFFEC T (COORDCONJ_BUT1 (R-PART1 (NOUN_DAMAGE1) ) (R-PART2 (C-INJURY (R-POLARITY (DET_NO1)))))) ) TRUMPET WARN : Breaking out core template s (C-INJURY-TEMPLATE C-DAMAGE-TEMPLATE ) Assuming C-BOMBING-TEMPLATE and C-INJURY-TEMPLATE have same R-INSTRUMEN T Assuming C-BOMBING-TEMPLATE and C-DAMAGE-TEMPLATE have same R-INSTRUMEN T Assuming C-BOMBING-TEMPLATE and C-INJURY-TEMPLATE have same R-DAT E Assuming C-BOMBING-TEMPLATE and C-DAMAGE-TEMPLATE have same R-DAT E Assuming C-BOMBING-TEMPLATE and C-INJURY-TEMPLATE have same R-PERPETRATO R Assuming C-BOMBING-TEMPLATE and C-DAMAGE-TEMPLATE have same R-PERPETRATO R Assuming C-BOMBING-TEMPLATE and C-INJURY-TEMPLATE have same R-TARGE T Assuming C-BOMBING-TEMPLATE and C-DAMAGE-TEMPLATE have same R-TARGE T Comparison of Program Answers with Answer Ke yThe NLTooLsET results for TST1-0099 were the following templates :", "type_str": "figure" }, "FIGREF2": { "uris": null, "num": null, "text": "\"SOVIET MARINES \" 15. INST-TYPE * 16. LOCATION PERU : SAN ISIDRO (NEIGHBORHOOD) : ORRANTIA (DISTRICT )17. PHYS-EFFECT SOME DAMAGE : \"VEHICLES \" SOME DAMAGE : \"EMBASSIES OF THE PRC AND THE SOVIET UNION \" SOME DAMAGE : \"EMBASSIES OF THE PRC \"", "type_str": "figure" } } } }