{ "paper_id": "M98-1011", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:16:08.056888Z" }, "title": "NYU: Description of the Proteus/PET System as Used for MUC-7 ST", "authors": [ { "first": "Roman", "middle": [], "last": "Yangarber", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "", "affiliation": {}, "email": "{roman|grishman}@cs.nyu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "M98-1011", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "Through the history of the MUCs, adapting Information Extraction (IE) systems to a new class of events has continued to be a time-consuming and expensive task. Since MUC-6, the Information Extraction effort at NYU has focused on the problem of portability and customization, especially at the scenario level. To begin to address this problem, we have built a set of tools which allow the user to adapt the system to new scenarios rapidly by providing examples of events in text, and examples of the associated database entries to be created. The system automatically uses this information to create general patterns appropriate for text analysis. 
The present system operates on two tiers:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "Proteus, the core extraction engine, an enhanced version of the one employed at MUC-6 [3]; and PET, the GUI front end, through which the user interacts with Proteus, as described recently in [5, 6]. It is our hope that the example-based approach will facilitate the customization of IE engines; we are particularly interested, as are other sites, in providing the non-technical user, such as a domain analyst unfamiliar with system internals, with the capability to perform IE effectively in a fixed domain.", "cite_spans": [ { "start": 187, "end": 189, "text": "5,", "ref_id": "BIBREF4" }, { "start": 190, "end": 191, "text": "6", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "In this paper we discuss the system's performance on the MUC-7 Scenario Template task (ST). The topics covered in the following sections are: the Proteus core extraction engine; the example-based PET interface to Proteus; and a discussion of how these were used to accommodate the MUC-7 Space Launch scenario task. We conclude with the evaluation of the system's performance and observations regarding possible areas of improvement. Figure 1 shows an overview of our IE system. 1 The system is a pipeline of modules, each drawing on attendant knowledge bases (KBs) to process its input and passing its output to the next module. The modular design ensures that control is encapsulated in immutable, domain-independent core components, while the domain-specific information resides in the knowledge bases. 
It is the latter which need to be customized for each new domain and scenario, as discussed in the next section.", "cite_spans": [ { "start": 483, "end": 484, "text": "1", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 438, "end": 446, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "The lexical analysis module (LexAn) is responsible for splitting the document into sentences, and the sentences into tokens. LexAn draws on a set of on-line dictionaries; these include the general COMLEX syntactic dictionary, and domain-specific lists of words and names. As a result, each token receives a reading, or a list of alternative readings in case the token is syntactically ambiguous. A reading consists of a list of features and their values, e.g., syntactic category = \"Noun\". (Figure 1: IE system architecture.) LexAn optionally invokes a statistical part-of-speech tagger, which eliminates unlikely readings for each token.", "cite_spans": [], "ref_spans": [ { "start": 451, "end": 459, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "STRUCTURE OF THE PROTEUS IE SYSTEM", "sec_num": null }, { "text": "The next three phases operate by deterministic, bottom-up, partial parsing, or pattern matching; the patterns are regular expressions which trigger associated actions. This style of text analysis, as contrasted with full syntactic parsing, has gained the wider popularity due to limitations on the accuracy of full syntactic parsers, and the adequacy of partial, semantically-constrained parsing for this task [3, 2, 1].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STRUCTURE OF THE PROTEUS IE SYSTEM", "sec_num": null }, { "text": "The name recognition patterns identify proper names in the text by using local contextual cues, such as capitalization, personal titles (\"Mr.\", \"Esq.\"), and company suffixes (\"Inc.\", \"Co.\"). 
2 The next module finds small syntactic units, such as basic NPs and VPs. When it identifies a phrase, the system marks the text segment with semantic information, e.g. the semantic class of the head of the phrase. 3 The next phase finds higher-level syntactic constructions using local semantic information: apposition, prepositional phrase attachment, limited conjunctions, and clausal constructions.", "cite_spans": [ { "start": 186, "end": 187, "text": "2", "ref_id": "BIBREF1" }, { "start": 406, "end": 407, "text": "3", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "STRUCTURE OF THE PROTEUS IE SYSTEM", "sec_num": null }, { "text": "The actions operate on the logical form representation (LF) of the discourse segments encountered so far. The discourse is thus a sequence of LFs corresponding to the entities, relationships, and events encountered in the analysis. An LF is an object with named slots (see the example in Figure 2). One slot in each LF, named", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STRUCTURE OF THE PROTEUS IE SYSTEM", "sec_num": null }, { "text": "\"Class\", has distinguished status, and determines the number and type of other slots that the object may contain. E.g., an entity of class \"Company\" has a slot called \"Name\". It also contains a slot \"Location\" which points to another entity, thereby establishing a relation between the location entity and the matrix entity. Events are specific kinds of relations, usually having several operands.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STRUCTURE OF THE PROTEUS IE SYSTEM", "sec_num": null }, { "text": "The subsequent phases operate on the logical forms built in the pattern matching phases. Reference resolution (RefRes) links anaphoric pronouns to their antecedents and merges other co-referring expressions. 
The discourse analysis module uses higher-level inference rules to build more complex event structures, where the information needed to extract a single complex fact is spread across several clauses. For example, there is a rule that merges a Mission entity with a corresponding Launch event. At this stage, we also convert all date expressions (\"yesterday\", \"last month\", etc.) to starting and ending dates as required for the MUC templates. Another set of rules formats the resultant LF into a form that is directly translatable, in a one-to-one fashion, into the MUC template structure; the translation is performed by the final template-generation phase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STRUCTURE OF THE PROTEUS IE SYSTEM", "sec_num": null }, { "text": "Our prior MUC experience has shown that building effective patterns for a new domain is a complex and time-consuming part of the customization process; it is highly error-prone, and usually requires detailed knowledge of system internals. With this in view, we have sought a disciplined method for the customization of knowledge bases, and of the pattern base in particular.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PET USER INTERFACE", "sec_num": null }, { "text": "The pattern base is organized in layers, corresponding to different levels of processing. This stratification naturally reflects the range of applicability of the patterns. At the lowest level are the most general patterns; they are applied first, and capture the most basic constructs. These include proper names, temporal expressions, and expressions for numeric entities and currencies. At the next level are the patterns that perform partial syntactic analysis (noun and verb groups). These are domain-independent patterns, useful in a wide range of tasks. 
At the next level are domain-specific patterns, useful across a narrower range of scenarios, but still having considerable generality. These patterns find relationships among entities, such as between persons and organizations. Lastly, at the highest level are the scenario-specific patterns, such as the clausal patterns that capture events. Proteus treats the patterns at the different levels differently. The lowest-level patterns, having the widest applicability, are built in as a core part of the system. These change little when the system is ported. The midrange patterns, applicable in certain commonly encountered domains, are provided as pattern libraries, which can be plugged in as required by the extraction task. For example, for the domain of \"business/economic news\", Proteus has a library with patterns that capture: entities (organization/company, person, location); and relations (person-organization, organization-location, parent organization-subsidiary).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Organization of Patterns", "sec_num": null }, { "text": "Lastly, the system acquires the most specific patterns directly from the user, on a per-scenario basis, through PET, a set of interactive graphical tools. In the process of building the custom pattern base, PET engages the user only at the level of surface representations, hiding the internal operation. The user's input is reduced to providing examples of events of interest in text, and describing the corresponding output structures to be created. 
Based on this information, PET automatically creates the appropriate patterns to extract the user-specified structures from the user-specified text, and suggests generalizations for the newly created patterns to broaden coverage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Organization of Patterns", "sec_num": null }, { "text": "The initial pattern base consists of the built-in patterns and the plugged-in pattern libraries corresponding to the domain of interest. These serve as the foundation for example-based acquisition. The development cycle, from the user's perspective, consists of iteratively acquiring patterns to augment the pattern base. The acquisition process entails several steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pattern Acquisition", "sec_num": null }, { "text": "Enter an example: the user enters a sentence containing a salient event, or copies and pastes text from a document through the corpus browser, a tool provided in the PET suite. We will consider the example \"Arianespace Co. has launched an Intelsat communications satellite.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pattern Acquisition", "sec_num": null }, { "text": "Choose an event template: the user selects from a menu of event names. A list of events, with their associated slots, must be given to the system at the outset, as part of the scenario definition. This example will generate an event called \"Launch\", with slots as in Figure 4: Vehicle, Payload, Agent, Site, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pattern Acquisition", "sec_num": null }, { "text": "Apply existing patterns: the system applies the current patterns to the example, to obtain an initial analysis, as in Figure 3. In the example shown, the system identified some noun and verb groups and their semantic types. 
For each element it matches, the system applies minimal generalization, in the sense that to be any less general, the element would have to match the example text literally. The system then presents the analysis to the user and initiates an interaction with her. Build pattern: when the user \"accepts\" it, the system builds a new pattern to match the example, and compiles the associated action; the action will fire when the pattern matches, and will fill the slots in the event template as in the example. The pattern is then added to the pattern base, which can be saved for later use.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pattern Acquisition", "sec_num": null }, { "text": "Syntactic generalization: actually, the pattern base acquires much more than the basic pattern that the user accepted. The system applies built-in meta-rules [1, 4] to produce a set of syntactic transformations from a simple active clause pattern or a bare noun phrase. For this active example, the pattern base will automatically acquire its variants: the passive, relative, relative passive, reduced relative, etc. 4 Proteus also inserts optional modifiers into the generated variants (such as sentence adjuncts, etc.) to broaden the coverage of the pattern. In consequence, a passive pattern which the system acquires from this simple example will match the event in the walk-through message, \"... said Televisa expects a second Intelsat satellite to be launched by Arianespace from French Guyana later this month ...\", with the help of lower-level patterns for named objects, and locative and temporal sentence adjuncts. 5", "cite_spans": [ { "start": 434, "end": 435, "text": "4", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Pattern Acquisition", "sec_num": null }, { "text": "This section describes how the Proteus/PET system was adapted to accommodate the MUC-7 scenario. 
The scenario-specific patterns were primarily of two types: those for launch events (\"NASA launched a rocket.\", \"The terrorists fired a missile.\") and those for missions (\"the retrieval of a satellite\"). Starting from patterns for simple active clauses, the system automatically generated patterns for the syntactic variants, such as the passive, relative, and reduced relative clauses. The missions added information regarding payloads and mission functions to a launch event, but did not directly generate a launch event. In some cases, the mission was syntactically tied to a particular launch event (\"... launched the shuttle to deploy a satellite\"). If there was no direct connection, the post-processing inference rules attempted to tie the mission to a launch event.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PERFORMANCE ON THE LAUNCH SCENARIO Scenario Patterns", "sec_num": null }, { "text": "Consider the event in Figure 4: the surface representation contains a generic \"Agent\" role. The agent can be of several types: e.g., it can be a launch vehicle, an organization, or even a launch site, in case the agent is a country. In this case, the role is filled by an organization, which, in principle, further admits the possibility of either the payload owner or the vehicle owner. The scenario specification mandates that the function of the \"agent\" be precisely specified, although at the surface it is underspecified. In this case, the function can be determined on the basis of the semantic class of the agent, and the observation that the payload-owner slot is already occupied unambiguously by another organization entity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Rules", "sec_num": null }, { "text": "This type of computation is performed by scenario-specific inference rules; in general, this determination can be quite complex. 
Translating the surface representations into those mandated by the task specification can involve many-to-many relations, such as the ones that exist between payloads and launch events, where multiple payloads correspond to a single event, and multiple launch events are concerned with a single payload.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Rules", "sec_num": null }, { "text": "One technique that appeared fruitful in the Launch scenario was extending our set of inference rules with heuristics. Often a slot in an event cannot be filled, as when patterns fail to find a syntactically suitable filler. Here we find two similar problems, concerning the launch date and the launch site. Our patterns recognize the corresponding locative and temporal noun phrases; however, because neither stands in a direct syntactic relation to the main launch event clause (here, headed by the verb \"explode\"), they fail to fill the appropriate slots in the event. We use a simple heuristic rule to recover from this problem: if the launch event has an empty date, and if the sentence contains a unique expression of the correct type (i.e. a date), use that expression to fill the empty slot. We have experimented with a variety of heuristics for several slots, including organizations for vehicle and payload owners and manufacturers, dates, and sites. At present, the contribution of these heuristics to our score accounts for just under 10% of the F-measure. 
It is also apparent that some of the heuristics actually overgenerate, though we have yet to analyze their effect in detail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Rules", "sec_num": null }, { "text": "We believe that the overall approach of example-based pattern acquisition is more appropriate than automatic training from annotated corpora, as the amount of training data for ST-level tasks is usually quite limited. We have found the pattern editing tool reasonably effective. However, we discovered that much of the task involved the creation and tuning of post-processing rules, and we do not yet have support in the tool for this activity. This consumed a considerable part of the customization effort. This points to an important problem that needs to be addressed, especially for tasks where the structure of the output templates differs substantially from the structure of the entities and events as picked up by the syntactic analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Rules", "sec_num": null }, { "text": "We did not specifically focus on the TE task within the launch scenario, and simply used the same system we had used for the ST task. 
Table 5 is a summary of the scores of our system.", "cite_spans": [], "ref_spans": [ { "start": 141, "end": 148, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Inference Rules", "sec_num": null }, { "text": "For a detailed description of the system, see [3, 5].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "At present, the result of the NYU MENE system, as used in the NE evaluation, does not yet feed into the ST processing. 3 These marks are pointers to the corresponding entities, which are created and added to the list of logical forms representing the discourse.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The expert user can view the variants which the system generates, and make changes to them directly. 5 The tools can be used to acquire non-clausal patterns as well, e.g. patterns for noun groups and complex noun phrases, to extend an existing pattern library.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We wish to thank Kristofer Franzén of Stockholm University for his assistance during the MUC-7 formal run.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "SRI International FASTUS system: MUC-6 test results and analysis", "authors": [ { "first": "Douglas", "middle": [], "last": "Appelt", "suffix": "" }, { "first": "Jerry", "middle": [], "last": "Hobbs", "suffix": "" }, { "first": "John", "middle": [], "last": "Bear", "suffix": "" }, { "first": "David", "middle": [], "last": "Israel", "suffix": "" }, { "first": "Megumi", "middle": [], "last": "Kameyama", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Kehler", "suffix": "" }, { "first": "David", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Karen", "middle": [], 
"last": "Meyers", "suffix": "" }, { "first": "Mabry", "middle": [], "last": "Tyson", "suffix": "" } ], "year": 1995, "venue": "Proc. Sixth Message Understanding Conf. (MUC-6)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douglas Appelt, Jerry Hobbs, John Bear, David Israel, Megumi Kameyama, Andy Kehler, David Martin, Karen Meyers, and Mabry Tyson. SRI International FASTUS system: MUC-6 test results and analysis. In Proc. Sixth Message Understanding Conf. (MUC-6), Columbia, MD, November 1995. Morgan Kaufmann.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "FASTUS: A finite-state processor for information extraction from real-world text", "authors": [ { "first": "Douglas", "middle": [], "last": "Appelt", "suffix": "" }, { "first": "Jerry", "middle": [], "last": "Hobbs", "suffix": "" }, { "first": "John", "middle": [], "last": "Bear", "suffix": "" }, { "first": "David", "middle": [], "last": "Israel", "suffix": "" }, { "first": "Mabry", "middle": [], "last": "Tyson", "suffix": "" } ], "year": 1993, "venue": "Proc. 13th Int'l Joint Conf. on Artificial Intelligence (IJCAI-93)", "volume": "", "issue": "", "pages": "1172--1178", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douglas Appelt, Jerry Hobbs, John Bear, David Israel, and Mabry Tyson. FASTUS: A finite-state processor for information extraction from real-world text. In Proc. 13th Int'l Joint Conf. on Artificial Intelligence (IJCAI-93), pages 1172-1178, August 1993.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The NYU system for MUC-6, or where's the syntax?", "authors": [ { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 1995, "venue": "Proc. Sixth Message Understanding Conf. (MUC-6)", "volume": "", "issue": "", "pages": "167--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ralph Grishman. The NYU system for MUC-6, or where's the syntax? In Proc. 
Sixth Message Understanding Conf. (MUC-6), pages 167-176, Columbia, MD, November 1995. Morgan Kaufmann.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The NYU system for MUC-6, or where's the syntax?", "authors": [ { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 1995, "venue": "Proc. Sixth Message Understanding Conf. (MUC-6)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ralph Grishman. The NYU system for MUC-6, or where's the syntax? In Proc. Sixth Message Understanding Conf. (MUC-6), Columbia, MD, November 1995. Morgan Kaufmann.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Information extraction: Techniques and challenges", "authors": [ { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 1997, "venue": "Lecture Notes in Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ralph Grishman. Information extraction: Techniques and challenges. In Maria Teresa Pazienza, editor, Information Extraction. Springer-Verlag, Lecture Notes in Artificial Intelligence, Rome, 1997.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Customization of information extraction systems", "authors": [ { "first": "Roman", "middle": [], "last": "Yangarber", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 1997, "venue": "Proc. International Workshop on Lexically Driven Information Extraction, Frascati", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roman Yangarber and Ralph Grishman. Customization of information extraction systems. In Paola Velardi, editor, Proc. International Workshop on Lexically Driven Information Extraction, Frascati, Italy, July 1997. 
Università di Roma.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "LF for the NP: \"a satellite built by Loral Corp. of New York for Intelsat\"" }, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "Initial analysis" }, "FIGREF2": { "uris": null, "num": null, "type_str": "figure", "text": "npC-company vgLaunch npSatellite. Tune pattern elements: the user can modify each pattern element in several ways: choose the appropriate level of generalization of its concept class, within the semantic concept hierarchy; force the element to match the corresponding text in the original example literally; make the element optional; remove it; etc. In this example, the user should likely generalize \"satellite\" to match any phrase designating a payload, and generalize the verb \"launch\" to a class containing its synonyms, e.g. \"fire\": npC-company vgC-Launch npC-Payload. Fill event slots: the user specifies how pattern elements are used to fill slots in the event template. Clicking on an element displays its logical form (LF). The user can drag-and-drop the LF, or any sub-component thereof, into a slot in the target event, as in Figure 4." }, "FIGREF3": { "uris": null, "num": null, "type_str": "figure", "text": "Event LF corresponding to a clause" } } } }