{ "paper_id": "M92-1025", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:13:07.796087Z" }, "title": "GE NLTOOLSET : DESCRIPTION OF THE SYSTEM AS USED FOR MUC-4", "authors": [ { "first": "George", "middle": [], "last": "Krupka", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Paul", "middle": [], "last": "Jacobs", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Lisa", "middle": [], "last": "Ra", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Lois", "middle": [], "last": "Childs", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Ira", "middle": [], "last": "Sider", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The GE NLTooLsET is a set of text interpretation tools designed to be easily adapted to ne w domains. This report summarizes the system and its performance on the MUG-4 task .", "pdf_parse": { "paper_id": "M92-1025", "_pdf_hash": "", "abstract": [ { "text": "The GE NLTooLsET is a set of text interpretation tools designed to be easily adapted to ne w domains. This report summarizes the system and its performance on the MUG-4 task .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The GE NLTooLsET aims at extracting and deriving useful information from text using a knowledge-based , domain-independent core of text processing tools, and customizing the existing programs to each new task . The program achieves this transportability by using a core knowledge base and lexicon that adapts easil y to new applications, along with a flexible text processing strategy that is tolerant of gaps in the program ' s knowledge base .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTIO N", "sec_num": null }, { "text": "The language analysis strategy in the NLTooLsET uses fairly detailed, chart-style syntactic parsin g guided by conceptual expectations . Domain-driven conceptual structures provide feedback in parsing, contribute to scoring alternative interpretations, help recovery from failed parses, and tie together information across sentence boundaries. The interaction between linguistic and conceptual knowledge sources at the leve l of linguistic relations, called \"relation-driven control\" was added to the system in a first implementation before MUC-4 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTIO N", "sec_num": null }, { "text": "In addition to flexible control, the design of the NLTooLsET allows each knowledge source to influenc e different stages of processing . For example, discourse processing starts before parsing, although many decisions about template merging and splitting are made after parsing . This allows context to guide language analysis, while language analysis still determines context .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTIO N", "sec_num": null }, { "text": "The NLTooLsET, now in Version 3 .0, has been developed and extended during the three years since th e MUCK-II evaluation . During this time, several person-years of development have gone into the system . The fundamental knowledge-based strategy has remained basically unchanged, but various modules have bee n extended and replaced, and new components have been added while the system has served as a testbed fo r a variety of experiments . 
The only new module added for MUC-4 was a mechanism for dealing with spatial and temporal information; most of the other improvements to the system were knowledge base extensions, enhancements to existing components, and bug fixes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "The next section briefly describes the major portions of the NLToolset and its control flow; the remainder of the paper discusses the application of the Toolset to the MUC-4 task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "Processing in the NLToolset divides roughly into three stages: (1) pre-processing, consisting mainly of a pattern matcher and discourse processing module, (2) linguistic analysis, including parsing and semantic interpretation, and (3) post-processing, or template filling. Each stage of analysis applies a combination of linguistic, conceptual, and domain knowledge, as shown in Figure 1. The pre-processor uses lexico-semantic patterns to perform some initial segmentation of the text, identifying phrases that are template activators, filtering out irrelevant text, combining and collapsing some linguistic constructs, and marking portions of text that could describe discrete events. This component is described in [1]. Linguistic analysis combines parsing and word sense-based semantic interpretation with domain-driven conceptual processing. The programs for linguistic analysis are largely those explained in [2, 3]; the changes made for MUC-4 involved mainly some additional mechanisms for recovering from failed processing and heavy pruning of spurious parses. Post-processing includes the final selection of templates and the mapping of semantic categories and roles onto those templates. This component used the basic elements from MUCK-II, adding a number of specialized rules for handling guerrilla warfare types and refining the discourse structures to perform the template splitting and merging required for MUC-3 and MUC-4.", "cite_spans": [ { "start": 727, "end": 730, "text": "[1]", "ref_id": "BIBREF0" }, { "start": 928, "end": 931, "text": "[2,", "ref_id": "BIBREF1" }, { "start": 932, "end": 934, "text": "3]", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 384, "end": 392, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "SYSTEM OVERVIEW", "sec_num": null }, { "text": "The control flow of the system is primarily from linguistic analysis to conceptual interpretation to domain interpretation, but there is substantial feedback from conceptual and domain interpretation to linguistic analysis. The MUC-4 version of the Toolset includes a version of a strategy called relation-driven control, which helps to mediate between the various knowledge sources involved in interpretation. Basically, relation-driven control gives each linguistic relation in the text (such as subject-verb, verb-complement, or verb-adjunct) a preference score based on its interpretation in context. Because these relations can apply to a great many different surface structures, relation-driven control provides a means of combining preferences without the tremendous combinatorics of scoring many complete parses.
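As a rough illustration of this idea, the following minimal sketch scores candidate word-sense assignments relation by relation and keeps only a fixed-width beam of the best combinations. All names here (the PREFERENCES table, score_relation, beam_interpretations, and the sense labels) are hypothetical inventions for this example, not the NLToolset's actual Lisp implementation, and the scores are made up.

```python
# Hypothetical sketch of relation-driven control: each linguistic relation is
# scored in context, and only the best-scoring sense combinations survive,
# rather than enumerating and scoring every complete parse.
from itertools import product

# Invented preference table: (relation, head sense, dependent sense) -> score.
# In a real system these preferences would come from conceptual and domain
# expectations rather than a static table.
PREFERENCES = {
    ("subject-verb", "terrorist-group", "attack-event"): 0.9,
    ("verb-complement", "attack-event", "building"): 0.8,
    ("verb-adjunct", "attack-event", "location"): 0.7,
}

def score_relation(relation, head_sense, dep_sense):
    """Preference score for one relation in context; unknown pairs are neutral."""
    return PREFERENCES.get((relation, head_sense, dep_sense), 0.5)

def beam_interpretations(relations, senses_for, beam_width=3):
    """Combine per-relation sense choices under a fixed-width beam.

    relations: (relation, head_word, dep_word) triples in processing order.
    senses_for: maps each word to its candidate senses.
    Returns up to beam_width (score, word -> sense) pairs, best first.
    """
    beam = [(1.0, {})]
    for relation, head, dep in relations:
        candidates = []
        for score, assignment in beam:
            head_senses = [assignment[head]] if head in assignment else senses_for[head]
            dep_senses = [assignment[dep]] if dep in assignment else senses_for[dep]
            for hs, ds in product(head_senses, dep_senses):
                new = dict(assignment)
                new[head], new[dep] = hs, ds
                candidates.append((score * score_relation(relation, hs, ds), new))
        beam = sorted(candidates, key=lambda c: -c[0])[:beam_width]
    return beam

# Example: sense choices for "guerrillas attacked the home in San Salvador".
best = beam_interpretations(
    [("subject-verb", "guerrillas", "attacked"),
     ("verb-complement", "attacked", "home"),
     ("verb-adjunct", "attacked", "San Salvador")],
    {"guerrillas": ["terrorist-group", "armed-force"],
     "attacked": ["attack-event"],
     "home": ["building", "family"],
     "San Salvador": ["location"]},
)
```

The point of the sketch is that scores accumulate per relation, so pruning happens at each step instead of after all complete parses have been built.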
Effectively, relation-driven control permits a \"beam\" strategy for considering multiple interpretations without producing hundreds or thousands of new paths through the linguistic chart.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pre-processing Analysis", "sec_num": null }, { "text": "The knowledge base of the system, consisting of a feature and function (unification-style) grammar with associated linguistic relations, and a core sense-based lexicon, still proves transportable and largely generic. The core lexicon contains over 10,000 entries, of which 37 are restricted because of specialized usage in the MUC-4 domain (such as device, which always means a bomb, and plant, which as a verb usually means to place a bomb and as a noun usually means the target of an attack). The core grammar contains about 170 rules, with 50 relations and 80 additional subcategories. There were 23 MUC-specific additions to this grammatical knowledge base, including 8 grammar rules, most of them dealing with unusual noun phrases that describe organizations in the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pre-processing Analysis", "sec_num": null }, { "text": "The control, pre-processing, and transportable knowledge base were all extremely successful for MUC-4; remarkably, lexical and grammatical coverage, along with the associated problems in controlling search and selecting among interpretations, proved not to be the major stumbling blocks for our system. While the program rarely produces an incorrect answer as a result of a sentence interpretation error, it frequently fails to distinguish multiple events, resolve vague or subtle references, and pick up subtle clues from non-key sentences. These are the major areas for future improvements in MUC-like tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pre-processing Analysis", "sec_num": null }, { "text": "Overview of Example: TST2-0048 is fairly representative of how the NLToolset performed on MUC-4. The program successfully interpreted most of the key sentences but missed some references and failed to tie some additional information to the main event. As a result, it filled two templates for what should have been one event and missed some additional fills. The program thus derived 53 slots out of a possible 52, with 34 correct, 19 missing, and 19 spurious, for .65 recall, .64 precision, and .35 overgeneration. We made no special effort to adapt the system or fix problems for this particular example; in fact, we used TST2 as a \"blind\" test and did not do any development on that set at all.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ANALYSIS OF TST2-0048", "sec_num": null }, { "text": "This example is actually quite simple at the sentence level (see Appendix F for the text and answer templates for the example): The sentences are fairly short and grammatical, especially when compared to some of the convoluted propaganda stories, and TRUMP had no real problems with them. The story is difficult from a discourse perspective, because it returns to the main event (the attack on Alvarado) essentially without any cue after describing a background event (the attack on Merino's home). In addition, the story is difficult and a bit unusual in the implicit information that is captured in the answer key: that the seven children, because they were home when Merino's house was attacked, are targets.
Most of the difference between our system's response and the correct templates was due to these two story-level problems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detail of Message Run", "sec_num": null }, { "text": "The program made one or two other minor mistakes; for example, it was penalized for filling in \"INDIVIDUAL\" as a perpetrator (from the phrase AN INDIVIDUAL PLACED A BOMB ON THE ROOF OF THE ARMORED VEHICLE), an apparently correct fill that could have been resolved to \"URBAN GUERRILLAS\". It missed the SOME DAMAGE effect for the vehicle, which should have been inferred from the fact that the story later says the roof of the vehicle collapsed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detail of Message Run", "sec_num": null }, { "text": "The system correctly parsed most of the main sentences, correctly linked the accusation in the first sentence to the murder of the Attorney General in the same sentence, and correctly separated the second event, which was distinguished by the temporal expression 5 days ago.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detail of Message Run", "sec_num": null }, { "text": "As explained earlier, the Toolset uses pattern matching for pre-processing, followed by discourse processing, parsing and semantic interpretation, and finally template filling. The pre-processor in this example filters out most of the irrelevant sentences (and, in this case, two of the relevant ones) and recognizes most of the compound names (e.g., SALVADORAN PRESIDENT-ELECT ALFREDO CRISTIANI and ATTORNEY GENERAL ROBERTO GARCIA ALVARADO). The pre-processor marks phrases that activate templates (such as A BOMB PLACED and CLAIMED CREDIT), brackets out phrases like source and location (ACCORDING TO CRISTIANI and IN DOWNTOWN SAN SALVADOR), and tags a few words with part-of-speech information to help the parser (e.g., auxiliaries (HAS), complementizers (THAT), and certain verbs following \"to\" (COLLAPSE)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detail of Message Run", "sec_num": null }, { "text": "The last stage of pre-processing is a discourse processing module, which attempts a preliminary segmentation of the input story using temporal, spatial, and other cues and event types, and by looking for certain definite and indefinite descriptions of events. In this case, the module identifies five potential segments. The first three turn out to be different descriptions of the same event (the killing of Alvarado), but they are later correctly merged into one template. The fourth segment is correctly identified as a new event (the attack on Merino's home). The fifth segment (describing the injury to Alvarado's bodyguards) is correctly treated as a new description, but is never identified as being part of the same event as the attack on Alvarado.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detail of Message Run", "sec_num": null }, { "text": "Linguistic analysis parses each sentence and produces (possibly alternative) semantic interpretations at the sentence level. These interpretations select word senses and roles, heavily favoring domain-specific senses. The parser did fail in one important sentence in TST2-0048: in the sentence \"A 15-YEAR-OLD NIECE OF MERINO'S WAS INJURED\", it could not parse the apostrophe-s construct.
This was a harmless failure because it occurs between a noun phrase and a verb phrase, and one of the parser's recovery strategies attaches any remaining compatible fragments that will contribute to a template fill.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detail of Message Run", "sec_num": null }, { "text": "The interpretation of each sentence is interleaved with domain-driven analysis. The conceptual analyzer, TRUMPET, takes the results of interpreting each phrase and tries to map them onto domain-based expectations, determining, for example, the appropriate role for the FMLN in \"ACCUSED THE FMLN\" as well as associating \"support\" events (such as accusations and effects) with main events (such as attacks or bombings). Because the discourse pre-processing module is prone to error, TRUMPET has begun to play a major role in resolving references as well as in guiding semantic interpretation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detail of Message Run", "sec_num": null }, { "text": "Post-processing maps the semantic interpretations onto templates, eliminating invalid fills (in this case none), combining certain multiple references (in the attack on Alvarado), and \"cleaning up\" the final output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detail of Message Run", "sec_num": null }, { "text": "The TRUMP parser of the NLToolset successfully parsed and interpreted the first sentence (S1) and correctly applied conjunction reduction to get Cristiani as the accuser and get the \"SUSPECTED OR ACCUSED BY AUTHORITIES\" fill. Embedded clauses are typically handled in much the same way as main clauses, except that the main clauses often add information about the CONFIDENCE slot. The system correctly treats the main event and the accusing as a single event, in spite of ignoring the definite reference \"THE CRIME\". In our system, linking an accusation (C-BLAME-TEMPLATE in the output below) to an event is the default.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpretation of Key Sentences", "sec_num": null }, { "text": "The following is the pre-processed input and final sentence-level interpretation of S1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpretation of Key Sentences", "sec_num": null }, { "text": "Pre-processed input :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpretation of Key Sentences", "sec_num": null }, { "text": "The next set of example sentences (S11-S13) is more difficult. There was one parser failure, with a successful recovery. As we have mentioned, we correctly identify this as a new event based on temporal information, but filter out S12 because it has no explicit event reference. This is not a bug: this sort of implicit target description is fairly infrequent, so we chose not to address it at this stage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TRUMPET WARN : Linking (special) C-REPORT-TEMPLATE as filler for R-SUPPORT of C-DEATH-TEMPLATE TRUMPET WARN : Linking (special) C-BLAME-TEMPLATE as filler for R-SUPPORT of C-DEATH-TEMPLATE Adding TERRORIST-NAME_FMLN1 from C-BLAME-TEMPLATE to R-PERPETRATOR of C-DEATH-TEMPLATE", "sec_num": null }, { "text": "(3) When body parts are damaged (e.g., \"the bomb destroyed his head\"), it is the owner of the body parts that is affected. However, such rules only scratch the surface of the reasoning that contributes to template filling.
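Heuristics like rule (3) above, together with the vehicle and building rules (1) and (2) described later in the comparison section, can be pictured as simple declarative part-whole transfer rules. The sketch below is purely illustrative: the PART_OF table, resolve_target, and the event dictionary keys are hypothetical names invented for this example, not the Toolset's actual representation.

```python
# Hypothetical encoding of the damage-transfer heuristics as declarative rules.
# A small part-whole table drives rules (2) and (3).
PART_OF = {
    "window": "building", "roof": "building",  # building parts
    "head": "person", "arm": "person",         # body parts
}

def resolve_target(event):
    """Pick (target, effect) for a damage event under the sketched rules.

    event: dict with optional keys 'exploding_object', 'near',
    'damaged_object', and 'owner'.
    """
    # Rule 1: a vehicle exploding near a building targets the building.
    if event.get("exploding_object") == "vehicle" and event.get("near") == "building":
        return "building", "SOME DAMAGE"
    damaged = event.get("damaged_object")
    # Rule 2: damage to a building part means the building sustained some damage.
    if PART_OF.get(damaged) == "building":
        return "building", "SOME DAMAGE"
    # Rule 3: damage to a body part affects the owner of the body part.
    if PART_OF.get(damaged) == "person":
        return event.get("owner", "person"), "INJURY"
    # Default: the damaged object itself is the target.
    return damaged, "SOME DAMAGE"

# "the bomb shattered windows" -> the building sustained some damage.
assert resolve_target({"damaged_object": "window"}) == ("building", "SOME DAMAGE")
```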
While the reference resolution problem is quite general and very interesting from a research perspective, the reasoning problem seems more MUC-specific, and it is hard to separate general reasoning issues from the peculiar details of the fill rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Activating new sense (C-REPORT-TEMPLATE (R-REL-TIME *PAST*) (R-MODALITY (C-QUALIFIER)) (R-POLARITY (C-QUALIFIER)) (R-OBJECT (C-DEATH-TEMPLATE", "sec_num": null }, { "text": "Aside from these problems, our system performed pretty well on this example, as it did on MUC as a whole. The recall and precision for this message were both over .60, with the program recovering most of the information from the text. As is typical of our MUC experience, the local processing of sentences was very accurate and complete, while the general handling of story-level details and template filling had some loose ends.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Activating new sense (C-REPORT-TEMPLATE (R-REL-TIME *PAST*) (R-MODALITY (C-QUALIFIER)) (R-POLARITY (C-QUALIFIER)) (R-OBJECT (C-DEATH-TEMPLATE", "sec_num": null }, { "text": "MUC-4 is a very difficult task, combining language interpretation at many levels with a variety of rules and strategies for template filling. The examples here illustrate some of the important characteristics of our system as well as where future progress can be made. Not surprisingly, the major problems that remain after MUC-4 are very similar to the ones that we identified at the end of MUC-3. This by itself might seem discouraging, but the fact that the system did much better on MUC-4 suggests that we can expect more improvements in the future. While there is a class of phenomena that we have not really begun to address (the body of world knowledge that contributes to interpreting events), there is also the ripe problem of interpreting text in context, in which MUC has given the field a leg up.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SUMMARY AND CONCLUSION", "sec_num": null } ], "back_matter": [ { "text": "The system filters S21 (this is an omission, because \"ESCAPED UNSCATHED\" should be recognized as an effect), but successfully interprets S22 and resolves \"ONE OF THEM\" to \"BODYGUARDS\". Note that it is the pronoun \"THEM\", not \"ONE\", that gets resolved, using a simple reference resolution heuristic that looks for the most recent syntactically and semantically compatible noun phrase. However, this action results in a penalty rather than a reward because the system does not tie the injury to the attack on Alvarado at the beginning of the story.
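The heuristic just described can be sketched in a few lines: scan the candidate antecedents from most recent to least recent and take the first one that is syntactically and semantically compatible. The function name, the feature dictionaries, and the two-feature notion of compatibility below are simplifying assumptions for illustration, not the system's actual Lisp code.

```python
# Hypothetical sketch of the "most recent compatible noun phrase" heuristic.
def resolve_pronoun(pronoun, candidates):
    """Return the text of the most recent compatible antecedent, or None.

    pronoun: dict with 'number' and 'semantic_type' features.
    candidates: noun phrases in document order (most recent last), each a
    dict with 'text', 'number', and 'semantic_type'.
    """
    for np in reversed(candidates):
        if (np["number"] == pronoun["number"]
                and np["semantic_type"] == pronoun["semantic_type"]):
            return np["text"]
    return None

# "ONE OF THEM": THEM is plural and human, so the most recent plural human
# noun phrase ("BODYGUARDS") is chosen as the antecedent.
antecedent = resolve_pronoun(
    {"number": "plural", "semantic_type": "human"},
    [{"text": "ATTORNEY GENERAL", "number": "singular", "semantic_type": "human"},
     {"text": "BODYGUARDS", "number": "plural", "semantic_type": "human"}],
)
assert antecedent == "BODYGUARDS"
```

As the penalty described above shows, a local heuristic like this can pick the right antecedent and still lose points when the resolved mention is not tied back to the correct story-level event.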
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calling Trumpet with FRAGMENT Interpretation : (VERB_INJURE1 (R-REL-TIME *PAST+ ) (R-EFFECT (C-INJURY) ) (R-EFFECTED (NOUI_IIECE1 (R-/UMBER +SINGULAR* ) (R-DEFINITE (DET_A1) ) (R-POSSESSES (FULLIAME_FRAICISCO-MERI10+ 1 (R-LUMBER +SINGULAR* ) (A-LAME FRANCISCO-MERINO \u25ba ))))) ) Activating new sens e (C-INJURY-TEMPLATE (R-REL-TIME *PAST* ) (R-MODALITY (C-QUALIFIER) ) (R-POLARITY (C-QUALIFIER) ) (A-TARGET (NOUN_IIECE1 (R-NUMBER *SINGULAR* ) (R-DEFINITE (DET_A1) ) (R-POSSESSES (FULLIAME_FRANCISCO-MERINO+ 1 (R-NUMBER *SINGULAR* ) (R-NAME FRANCISCO-MERI10*))))) ) TRUMPET YARN : Linking (special) C-INJURY-TEMPLATE as filler for R-TARGET-EFFECT of C-BOMBING-TEMPLAT E", "sec_num": null }, { "text": "The NLTooLSET results for TST2-0048 were the following templates (Annotations have been added in lowe r case preceded by %, and blank slot (-) fills have been deleted to save space) . Some of the missing information in the response template comes from failing to tie information in t o the main event or failing to recover implicit information . This is the case with the damage to the vehicle , which is described in passing, the children who were in Merino's home, and the driver who escaped unscathed . Almost all the rest of the departures owe to some aspect of reference resolution-from failing to recognize th e injury to the bodyguards as part of Alvarado ' s murder, to the extra fills \"INDIVIDUAL\" and \"ATTORNE Y GENERAL\" that were co-referential with others . One of these turned out to be a simple bug, in that the titl e \"ATTORNEY GENERAL\" in our system was interpreted as a different type (GOVERNMENT OFFICIAL ) from the noun phrase \"ATTORNEY GENERAL\" (LEGAL OR JUDICIAL) ; thus the system failed to unify the references. However, the general problem of reference resolution is certainly one of the main areas wher e future progress can come .The other illustrative problem with this example is the degree to which relatively inconsequential fact s can be pieced together into an interpretation . There is no theoretical reason why our system didn't know about different forms of damage to vehicles, but we certainly wouldn ' t want to spend a lot of time encoding this sort of knowledge . This turned out to be a rather tedious part of the MUC task . We did go so far as to have template filling heuristics, for example, that tell the system : (1) When vehicles explode near buildings , it is the buildings and not the vehicles that are the targets, (2) When parts of buildings are destroyed o r damaged (e.g . \"the bomb shattered windows\") this means that the buildings sustained some damage, an d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of Program Answers with Answer Ke y", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Lexico-semantic pattern matching as a companio n to parsing in text understanding", "authors": [ { "first": "Paul", "middle": [ "S" ], "last": "Jacobs", "suffix": "" }, { "first": "George", "middle": [ "R" ], "last": "Krupka", "suffix": "" }, { "first": "Lisa", "middle": [ "F" ], "last": "Rau", "suffix": "" } ], "year": 1991, "venue": "Fourth DARPA Speech and Natural Language Workshop", "volume": "", "issue": "", "pages": "337--342", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul S . Jacobs, George R . Krupka, and Lisa F . Rau . Lexico-semantic pattern matching as a companio n to parsing in text understanding . 
In Fourth DARPA Speech and Natural Language Workshop, pages 337-342, San Mateo, CA, February 1991. Morgan Kaufmann.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "SCISOR: Extracting information from on-line news", "authors": [ { "first": "Paul", "middle": [], "last": "Jacobs", "suffix": "" }, { "first": "Lisa", "middle": [], "last": "Rau", "suffix": "" } ], "year": 1990, "venue": "Communications of the Association for Computing Machinery", "volume": "33", "issue": "11", "pages": "88--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Jacobs and Lisa Rau. SCISOR: Extracting information from on-line news. Communications of the Association for Computing Machinery, 33(11):88-97, November 1990.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "TRUMP: A transportable language understanding program", "authors": [ { "first": "Paul", "middle": [ "S" ], "last": "Jacobs", "suffix": "" } ], "year": 1992, "venue": "International Journal of Intelligent Systems", "volume": "7", "issue": "3", "pages": "245--276", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul S. Jacobs. TRUMP: A transportable language understanding program. International Journal of Intelligent Systems, 7(3):245-276, 1992.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Stages of data extraction", "type_str": "figure", "uris": null, "num": null }, "FIGREF1": { "text": "[byline : SAN SALVADOR, 19 APR 89 (ACAN-EFE) --] [bracket : [TEXT]] [fullname : SALVADORAN PRESIDENT-ELECT ALFREDO CRISTIANI] CONDEMNED THE TERRORIST KILLING OF [fullname : ATTORNEY GENERAL ROBERTO GARCIA ALVARADO] AND [comp : ACCUSED THE FARABUNDO MARTI NATIONAL LIBERATION FRONT [bracket : (FMLN)] OF] THE CRIME . out core templates (C-DEATH-TEMPLATE)", "type_str": "figure", "uris": null, "num": null } } } }