{ "paper_id": "M93-1016", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:14:36.530484Z" }, "title": "NEW YORK UNIVERSITY : DESCRIPTION OF THE PROTEUS SYSTEM AS USED FOR MUC-5", "authors": [ { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "", "affiliation": {}, "email": "" }, { "first": "John", "middle": [], "last": "Sterling", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The Proteus system which we have used for MUC-5 is largely unchanged from that used for MUC-3 and MUC-4. It has three main components: a syntactic analyzer, a semantic analyzer, and a template generator. The Proteus syntactic analyzer was developed starting in the fall of 1984 as a common base for all the applications of the Proteus Project. Many aspects of its design reflect its heritage in the Linguistic String Parser, previously developed and still in use at New York University. The current system, including the Restriction Language compiler, the lexical analyzer, and the parser proper, comprises approximately 4500 lines of Common Lisp.", "pdf_parse": { "paper_id": "M93-1016", "_pdf_hash": "", "abstract": [ { "text": "The Proteus system which we have used for MUC-5 is largely unchanged from that used for MUC-3 and MUC-4. It has three main components: a syntactic analyzer, a semantic analyzer, and a template generator. The Proteus syntactic analyzer was developed starting in the fall of 1984 as a common base for all the applications of the Proteus Project. Many aspects of its design reflect its heritage in the Linguistic String Parser, previously developed and still in use at New York University. 
The current system, including the Restriction Language compiler, the lexical analyzer, and the parser proper, comprises approximately 4500 lines of Common Lisp.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The semantic analyzer was initially developed in 1987 for the MUCK-I (RAINFORMs) application, extended for the MUCK-II (OPREPS) application, and has been incrementally revised since. It currently consists of about 3000 lines of Common Lisp (excluding the domain-specific information).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The template generator was written from scratch for the MUC-5 joint venture task; it is about 1200 lines of Common Lisp.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The text goes through the five major stages of processing: lexical analysis, syntactic analysis, semantic analysis, reference resolution, and template generation (see figure 1). In addition, some restructuring of the logical form is performed both after semantic analysis and after reference resolution (only the restructuring after reference resolution is shown in figure 1). Processing is basically sequential: each sentence goes through lexical, syntactic, and semantic analysis and reference resolution; the logical form for the entire message is then fed to template generation. However, semantic (selectional) checking is performed during syntactic analysis, employing essentially the same code later used for semantic analysis. 
", "cite_spans": [], "ref_spans": [ { "start": 170, "end": 179, "text": "figure 1)", "ref_id": "FIGREF0" }, { "start": 371, "end": 379, "text": "figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Stages of processing", "sec_num": null }, { "text": "Lexical", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Sources", "sec_num": null }, { "text": "Our dictionaries contain only syntactic information: the parts of speech for each word, information about the complement structure of verbs, distributional information (e.g., for adjectives and adverbs), etc. We follow closely the set of syntactic features established for the NYU Linguistic String Parser. This information is entered in LISP form using noun, verb, adjective, and adverb macros for the open-class words, and a word macro for other parts of speech:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Analysis Dictionary Format", "sec_num": null }, { "text": "(ADVERB \"ABRUPTLY\" :ATTRIBUTES (DSA)) (ADJECTIVE \"ABRUPT\") (NOUN :ROOT \"ABSCESS\" :ATTRIBUTES (NCOUNT)) (VERB :ROOT \"ABSCOND\" :OBJLIST (NULLOBJ PN (PVAL (FROM WITH))))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Analysis Dictionary Format", "sec_num": null }, { "text": "The noun and verb macros automatically generate the regular inflectional forms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Analysis Dictionary Format", "sec_num": null }, { "text": "The primary source of our dictionary information about open-class words (nouns, verbs, adjectives, and adverbs) is the machine-readable version of the Oxford Advanced Learner's Dictionary (\"OALD\"). 
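The regular inflectional forms generated by the noun and verb macros can be sketched as follows; this is a hypothetical Python rendering using the standard regular English suffix rules, not the actual macro code:

```python
def inflect_noun(root):
    """Generate the regular inflected forms for a noun entry (sketch)."""
    if root.endswith(("s", "x", "z", "ch", "sh")):
        plural = root + "es"
    elif root.endswith("y") and root[-2:-1] not in "aeiou":
        plural = root[:-1] + "ies"
    else:
        plural = root + "s"
    return {"singular": root, "plural": plural}

def inflect_verb(root):
    """Generate the regular verb inflections (sketch)."""
    third = inflect_noun(root)["plural"]  # third-person singular uses the same s/es rule
    if root.endswith("e"):
        past, ing = root + "d", root[:-1] + "ing"
    else:
        past, ing = root + "ed", root + "ing"
    return {"base": root, "3sg": third, "past": past, "ing": ing}
```

So the ABSCESS and ABSCOND entries above would yield "abscesses" and "absconds"/"absconded"/"absconding" without those forms being listed explicitly.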
We have written programs which take the SGML (Standard Generalized Markup Language) version of the dictionary, extract information on inflections, parts of speech, and verb subcategorization (including information on adverbial particles and prepositions gleaned from the examples), and generate the LISP-ified form shown above. This is supplemented by a manually-coded dictionary (about 1500 lines, 900 entries) for closed-class words, words not adequately defined in the OALD, and a few very common words. In addition, we used several specialized dictionaries for MUC-5, including a location dictionary (with all countries, continents, and major cities (CITY1 or PORT1 in the gazetteer)), a dictionary of corporate designators, a dictionary of job titles, and a dictionary of currencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dictionary Files", "sec_num": null }, { "text": "The text reader splits the input text into tokens and then attempts to assign to each token (or sequence of tokens, in the case of an idiom) a definition (part of speech and syntactic attributes). The matching process proceeds in five steps: dictionary lookup, lexical pattern matching, spelling correction, prefix stripping, and default definition assignment. Dictionary lookup immediately retrieves definitions assigned by any of the dictionaries (including inflected forms). The specialized dictionaries are stored in memory, while the main dictionary is accessed from disk (using hashed index random access).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lookup", "sec_num": null }, { "text": "Lexical pattern matching is used to identify a variety of specialized patterns, such as numbers, dates, times, and possessive forms. 
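The five-step matching process can be sketched in Python; this is an illustrative re-implementation (the dictionary format, pattern list, prefix list, and helper names are hypothetical, not the Proteus code):

```python
import re

def doubled_letter_match(a, b):
    """Spelling-correction heuristic: true iff one word is the other with a
    single letter doubled, e.g. 'mispelled' vs 'misspelled'."""
    if len(a) > len(b):
        a, b = b, a
    if len(b) - len(a) != 1:
        return False
    for i, ch in enumerate(b):
        if b[:i] + b[i + 1:] != a:
            continue
        # the dropped letter must be one half of a doubled pair in b
        if (i > 0 and b[i - 1] == ch) or (i + 1 < len(b) and b[i + 1] == ch):
            return True
    return False

def assign_definition(token, dictionary, patterns, prefixes=("un",), mixed_case=True):
    """Five-step lookup: dictionary, lexical patterns, spelling correction,
    prefix stripping, then a default definition (sketch)."""
    word = token.lower()
    if word in dictionary:                              # 1. dictionary lookup
        return dictionary[word]
    for pos, regex in patterns:                         # 2. lexical pattern matching
        if re.fullmatch(regex, token):
            return {"pos": pos}
    for entry in dictionary:                            # 3. spelling correction
        if doubled_letter_match(word, entry):
            return dictionary[entry]
    for p in prefixes:                                  # 4. prefix stripping
        if word.startswith(p) and word[len(p):] in dictionary:
            return dictionary[word[len(p):]]
    if mixed_case and not token[:1].isupper():          # 5. default definition
        return {"pos": "common-noun"}
    return {"pos": "proper-noun"}
```

The default step mirrors the behavior described below: in mixed-case text only undefined capitalized words default to proper nouns, while in monocase text every undefined word does.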
The set of lexical patterns was substantially expanded for MUC-5 to include various forms of people's names, company names, locations, and currencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lookup", "sec_num": null }, { "text": "The lexical patterns are further discussed below, in the \"What's new for MUC-5\" section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lookup", "sec_num": null }, { "text": "If neither dictionary lookup nor lexical pattern matching is successful, spelling correction and prefix stripping are attempted. For words of any length, we identify an input token as a misspelled form of a dictionary entry if one of the two has a single instance of a letter while the other has a doubled instance of the letter (e.g., \"mispelled\" and \"misspelled\"). The prefix stripper attempts to identify the token as a combination of a prefix (e.g., \"un\") and a word defined in the dictionary. If all of these procedures fail, we assign a default definition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lookup", "sec_num": null }, { "text": "In mixed-case text, undefined capitalized words are tagged as proper nouns; undefined lower-case words are tagged as common nouns. In monocase text, all undefined words are tagged as proper nouns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lookup", "sec_num": null }, { "text": "Syntactic analysis involves two stages of processing: parsing and syntactic regularization. At the core of the system is an active chart parser. The grammar is an augmented context-free grammar, consisting of BNF rules plus procedural restrictions which check grammatical constraints not easily captured in the BNF. Associated with each BNF rule is a regularization rule, which computes the regularized form of each node in the parse tree from the regularized forms of its immediate constituents. 
These regularization rules are based on lambda-reduction, as in GPSG. The primary function of syntactic regularization is to reduce all clauses to a standard form consisting of aspect and tense markers, the operator (verb or adjective), and syntactically marked cases. For example, the definition of assertion, the basic S structure in our grammar, is :(s !( )).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": null }, { "text": "Here the portion after the single colon defines the regularized structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": null }, { "text": "Coordinate conjunction is introduced by a metarule (as in GPSG), which is applied to the context-free components of the grammar prior to parsing. The regularization procedure expands any conjunction into a conjunction of clauses or of noun phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": null }, { "text": "The output of the parser for the first sentence of 0592, \"BRIDGESTONE SPORTS CO.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": null }, { "text": "(SENTENCE (CENTERS (CENTER (ASSERTION (SUBJECT (NSTG (LNR (NVAR (NAMESTG (LNAMER (N \"BRIDGESTONE\" \"SPORTS\" \"CO\" \".\"))))))) (VERB (LTVR (TV \"SAID\"))) (SA (SA-VAL (NSTGT (NSTG (LNR (NVAR (N \"FRIDAY\"))))))) (OBJECT (ASSERTION (SUBJECT (NSTG (LNR (NVAR (PRO \"IT\"))))) (VERB (LTVR (TV \"HAS\"))) (OBJECT (VENO (LVENR (VEN \"SET\" \"up\")) (OBJECT (NSTGO (NSTG (LNR (LN (TPOS (LTR (T \"A\")))) (NVAR (N \"JOINT\" \"venture\")) The system uses a chart parser operating top-down, left-to-right. 
As edges are completed (i.e., as nodes of the parse tree are built), restrictions associated with those productions are invoked to assign and test features of the parse tree nodes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SAID FRIDAY IT HAS SET UP A JOINT VENTURE IN TAIWAN WITH A LOCAL CONCERN AND A JAPANESE TRADING HOUSE TO PRODUCE GOLF CLUBS TO BE SHIPPED TO JAPAN . \" i s", "sec_num": null }, { "text": "(RN (RN-VAL (PN (P \"IN\") (NSTGO (NSTG (LNR (NVAR (NAMESTG (LNAMER (N \"TAIWAN\"))))))))) (MORE-RN (RN-VAL (PN (P \"WITH\") (NSTGO (NSTG (LNR (LNR (LN (TPOS (LTR (T \"A\"))) (APOS (APOSVAR (AVAR (ADJ \"LOCAL\"))))) (NVAR (N \"CONCERN\"))) (CONJ-WORD (\"AND\" \"AND\")) (LNR (LN (TPOS (LTR (T \"A\"))) (APOS (APOSVAR (AVAR (ADJ \"JAPANESE\"))))) (NVAR (N \"TRADING\" \"house\")))))))) (MORE-RN (RN-VAL (TOVO (TO (\"TO\" \"TO\")) (LVR (V \"PRODUCE\")) (OBJECT (NSTGO (NSTG (LNR (LN (NPOS (NPOSVAR (N \"GOLF\")))) (NVAR (N \"CLUBS\")))))) (SA (SA-VAL (TOVO (TO (\"TO\" \"TO\")) (LVR (V \"BE\")) (OBJECT (OBJECTBE (VENPASS (LVENR (VEN \"SHIPPED\")) (SA (SA-VAL (PN (P \"TO\") (NSTGO (NSTG (LNR (NVAR (NAMESTG (LNAMER (N \"JAPAN\")))))))))))))))))))))))))))))))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SAID FRIDAY IT HAS SET UP A JOINT VENTURE IN TAIWAN WITH A LOCAL CONCERN AND A JAPANESE TRADING HOUSE TO PRODUCE GOLF CLUBS TO BE SHIPPED TO JAPAN . \" i s", "sec_num": null }, { "text": "If a restriction fails, that edge is not added to the chart. 
When certain levels of the tree are complete (those producing noun phrase and clause structures), the regularization rules are invoked to compute a regularized structure for the partial parse, and selection is invoked to verify the semantic well-formedness of the structure (as noted earlier, selection uses the same \"semantic analysis\" code subsequently employed to translate the tree into logical form).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SAID FRIDAY IT HAS SET UP A JOINT VENTURE IN TAIWAN WITH A LOCAL CONCERN AND A JAPANESE TRADING HOUSE TO PRODUCE GOLF CLUBS TO BE SHIPPED TO JAPAN . \" i s", "sec_num": null }, { "text": "One unusual feature of the parser is its weighting capability. Restrictions may assign scores to nodes; the parser will perform a best-first search for the parse tree with the highest score. This scoring is used to implement various preference mechanisms:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SAID FRIDAY IT HAS SET UP A JOINT VENTURE IN TAIWAN WITH A LOCAL CONCERN AND A JAPANESE TRADING HOUSE TO PRODUCE GOLF CLUBS TO BE SHIPPED TO JAPAN . \" i s", "sec_num": null }, { "text": "\u2022 closest attachment of modifiers (we penalize each modifier by the number of words separating it from its head)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SAID FRIDAY IT HAS SET UP A JOINT VENTURE IN TAIWAN WITH A LOCAL CONCERN AND A JAPANESE TRADING HOUSE TO PRODUCE GOLF CLUBS TO BE SHIPPED TO JAPAN . \" i s", "sec_num": null }, { "text": "\u2022 preferred narrow conjoining for clauses (we penalize a conjoined clause structure by the number of words it subsumes)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SAID FRIDAY IT HAS SET UP A JOINT VENTURE IN TAIWAN WITH A LOCAL CONCERN AND A JAPANESE TRADING HOUSE TO PRODUCE GOLF CLUBS TO BE SHIPPED TO JAPAN . 
\" i s", "sec_num": null }, { "text": "\u2022 preference semantics (selection does not reject a structure, but imposes a heavy penalty if the structure does not match any lexico-semantic model, and a lesser penalty if the structure matches a model but with some operands or modifiers left over) [2, 3] \u2022 relaxation of certain syntactic constraints, such as the count noun constraint, adverb position constraints, and comma constraints [4]. The grammar is gradually being enlarged to cover more of the LSP grammar.", "cite_spans": [ { "start": 252, "end": 255, "text": "[2,", "ref_id": "BIBREF1" }, { "start": 256, "end": 259, "text": "3 ]", "ref_id": "BIBREF2" }, { "start": 394, "end": 397, "text": "[4]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "SAID FRIDAY IT HAS SET UP A JOINT VENTURE IN TAIWAN WITH A LOCAL CONCERN AND A JAPANESE TRADING HOUSE TO PRODUCE GOLF CLUBS TO BE SHIPPED TO JAPAN . \" i s", "sec_num": null }, { "text": "The current grammar is 1600 lines of BNF and Restriction Language plus 300 lines of Lisp; it includes 186 non-terminals, 464 productions, and 132 restrictions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SAID FRIDAY IT HAS SET UP A JOINT VENTURE IN TAIWAN WITH A LOCAL CONCERN AND A JAPANESE TRADING HOUSE TO PRODUCE GOLF CLUBS TO BE SHIPPED TO JAPAN . \" i s", "sec_num": null }, { "text": "Over the course of the MUCs we have added several mechanisms for recovering from sentences the grammar cannot fully parse. For MUC-5, we found that the most effective was our \"fitted parse\" mechanism, which attempts to cover the sentence with noun phrases and clauses, preferring the longest noun phrases or clauses which can be identified.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SAID FRIDAY IT HAS SET UP A JOINT VENTURE IN TAIWAN WITH A LOCAL CONCERN AND A JAPANESE TRADING HOUSE TO PRODUCE GOLF CLUBS TO BE SHIPPED TO JAPAN . 
\" i s", "sec_num": null }, { "text": "The output of syntactic analysis goes through semantic analysis and reference resolution and is then added to the accumulating logical form for the message. Following both semantic analysis and reference resolution certain transformations are performed to simplify the logical form. All of this processing makes use of a concept hierarchy which captures the class/subclass/instance relations in the domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Analysis And Reference Resolution", "sec_num": null }, { "text": "Semantic analysis uses a set of lexico-semantic models to map the regularized syntactic analysis into a semantic representation. Each model specifies a class of verbs, adjectives, or nouns and a set of operands; for each operand it indicates the possible syntactic case markers, the semantic class of the operand, whether or not the operand is required, and the semantic case to be assigned to the operand in the output representation. For example, the model for \" forms a joint venture with \" is given in the accompanying figure. The models are arranged in a shallow hierarchy with inheritance, so that arguments and modifiers which are shared by a class of verbs need only be stated once. The model above inherits only from the most general clause model, clause-any, which includes general clausal modifiers such as negation, time, tense, modality, etc. The MUC-5 system has 61 clause models, 2 nominalization models, and 45 other noun phrase models, a total of about 1700 lines. 
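The inheritance and subsumption behavior of the model hierarchy can be sketched as follows; a minimal Python analogue of the defconcept declarations and the class check applied to model operands (illustrative, not the Proteus code):

```python
class ConceptHierarchy:
    """Concept hierarchy with single-parent `typeof` links (sketch)."""

    def __init__(self):
        self.parent = {}

    def defconcept(self, name, typeof=None):
        # analogous to (defconcept C-company :typeof C-muc5-entity)
        self.parent[name] = typeof

    def isa(self, concept, ancestor):
        """True iff `ancestor` is `concept` or a concept above it."""
        while concept is not None:
            if concept == ancestor:
                return True
            concept = self.parent.get(concept)
        return False

h = ConceptHierarchy()
h.defconcept("C-muc5-entity")
h.defconcept("C-company", typeof="C-muc5-entity")
h.defconcept("C-venture", typeof="C-company")
h.defconcept("C-joint-venture", typeof="C-venture")
```

With this hierarchy, an operand constrained to C-muc5-entity accepts a C-joint-venture filler, but not the reverse.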
The class C-muc5-entity in the clause model refers to the concept in the concept hierarchy, whose entries have the form:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Analysis And Reference Resolution", "sec_num": null }, { "text": "(defconcept C-muc5-entity) (defconcept C-company :typeof C-muc5-entity) (defconcept C-government-or-country :typeof C-muc5-entity) (defconcept C-venture :typeof C-company) (defconcept C-joint-venture :typeof C-venture)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Analysis And Reference Resolution", "sec_num": null }, { "text": "This inheritance mechanism is also used to define word classes, such as the W-form-venture class:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Analysis And Reference Resolution", "sec_num": null }, { "text": "(defconcept W-form-venture) (defconcept form :typeof W-form-venture) (defconcept establish :typeof W-form-venture) (defconcept expand :typeof W-form-venture) (defconcept launch :typeof W-form-venture) (defconcept set-up :typeof W-form-venture)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Analysis And Reference Resolution", "sec_num": null }, { "text": "There are currently a total of 154 concepts in the hierarchy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Analysis And Reference Resolution", "sec_num": null }, { "text": "The output of semantic analysis is a nested set of entity and event structures, with arguments labeled by keywords primarily designating semantic roles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Analysis And Reference Resolution", "sec_num": null }, { "text": "For the first sentence of 0593, the output is ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Analysis And Reference Resolution", "sec_num": null }, { "text": "Reference resolution is applied to the 
output of semantic analysis in order to replace anaphoric noun phrases (representing either events or entities) by appropriate antecedents. Each potential anaphor is compared to prior entities or events, looking for a suitable antecedent such that the class of the anaphor (in the concept hierarchy) is equal to or more general than that of the antecedent, the anaphor and antecedent match in number, the restrictive modifiers in the anaphor have corresponding arguments in the antecedent, and the non-restrictive modifiers (e.g., apposition) of the anaphor are not inconsistent with those of the antecedent. Special tests are provided for names, since people and companies may be referred to by a subset of their full names.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference resolution", "sec_num": null }, { "text": "The transformations which are applied after semantic analysis and after reference resolution simplify and regularize the logical form in various ways. The transformations after semantic analysis primarily standardize the attribute structure of entities so that reference resolution will work properly. The transformations after reference resolution simplify the task of template generation by casting the events in a more uniform framework and performing a limited number of inferences. For example, we show here a rule which transforms the logical form produced from \"X formed a joint venture with Y\" into the equivalent for \"X and Y formed a joint venture\":", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logical form transformations", "sec_num": null }, { "text": "(((event :predicate ?predicate :identifier ?id1 :agent ?agent :joint-venture (entity . ?R1) (entity :identifier ?id2 . 
?R3) (condition (isa '?predicate 'C-tie-up))) --> ((modify 1 (list :agent (conjoin-entities '?agent '?company-list-2))) (modify 2 '(:agent nil :tied-up t))))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logical form transformations", "sec_num": null }, { "text": "There are currently 32 such rules. These transformations are written as productions and applied using a simple data-driven production system interpreter which is part of the Proteus system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logical form transformations", "sec_num": null }, { "text": "Once all the sentences in an article have been processed through syntactic analysis, semantic analysis, and the logical form transformations, the resulting logical forms are sent to the template generator. The logical form events and entities produced by the transformations are in close correspondence to the template objects needed for MUC-5, so template generation is fairly straightforward. The greatest complexity was involved in the procedures for accessing the two large data bases, the gazetteer (for normalizing locations) and the Standard Industrial Classification (for classifying industries).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Template generator", "sec_num": null }, { "text": "Figure 2 shows how our recall gradually improved over the development period. Precision remained within a fairly narrow range, from 47 to 63, throughout the testing. Five months were available for development (March - July). One person was assigned full-time for the entire period; a second person assisted, approximately 2/3 time, for the last three months, for a total of about 7 person-months of effort (this excludes time in August preparing for the conference). 
March and April were devoted to getting an initial understanding of the fill rules, making minimal lexical scanner additions so that we could parse the text, developing input code to handle the different article formats, and developing some routines for larger-scale pattern matching (which were eventually not used). System integration and integrated system testing did not begin until mid-May, a couple of weeks before the dry run. Daily system testing began with a set of 25 articles, but shifted after the dry run to the first 100 dry-run messages (with the second 100 dry-run messages being used on occasion as a blind test). The overhead of getting started -- understanding the fill rules, handling the different article formats, generating the more complex templates, and using the various data bases (gazetteer, SIC, currency table, corporate designator table) -- was much greater than for prior MUCs, while the manpower we had for the project was in fact somewhat less. In consequence, our system is relatively less developed than our MUC-3 system, for example. In particular, the attribute structure for the principal entity types (for MUC-5, companies) was less developed;", "cite_spans": [], "ref_spans": [ { "start": 21, "end": 29, "text": "Figure 2", "ref_id": "FIGREF5" }, { "start": 1343, "end": 1405, "text": "(gazetteer, SIC, currency table, corporate designator table)", "ref_id": null } ], "eq_spans": [], "section": "Learning", "sec_num": null }, { "text": "this adversely impacted the performance of our reference resolution component and hence our event merging. This impact was evident in our performance on the walkthrough message, 0593. 
We identified the primary constituent events (the joint venture and the associated ownership relations), but we failed to identify several of the co-reference relations, because of", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning", "sec_num": null }, { "text": "\u2022 a bug in the handling of appositional names followed by relative clauses \u2022 failure to do spelling correction on names (we only correct spellings to match dictionary entries) \u2022 shortcomings in the attribute structure of company entities", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning", "sec_num": null }, { "text": "Because of these problems and a weak event merging rule (compared to the more detailed rules developed for MUC-4, for example), we generated two separate tie-ups for the article, instead of one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning", "sec_num": null }, { "text": "The system was also not tuned to any significant degree to take advantage of the MUC-5 scoring rules. Based on a suggestion by Boyan Onyshkevych, we conducted a small experiment after the conference. Because one is told in advance that almost every article in the corpus will have a reportable event, we modified the system to generate a tie-up between a Japanese company and an Indonesian company (the two most frequent nationalities in the training corpus) whenever the text analysis components were not able to find a tie-up. 
This simple strategy reduced our error rate on the test corpus by 2%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning", "sec_num": null }, { "text": "The Proteus system has a pattern matcher based on regular expressions with provision for procedural tests, which is intended for identifying lexical units before parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Analyzer", "sec_num": null }, { "text": "Prior to MUC-5, the system employed a small number of patterns, for structures such as dates, times, and numbers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Analyzer", "sec_num": null }, { "text": "The set of patterns was substantially enlarged for MUC-5, to include patterns for different types of currencies, for company names, for people's names, for locations, and for names of indeterminate type.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Analyzer", "sec_num": null }, { "text": "In mixed-case text, we used capitalization as the primary indication of the beginning of a name; in monocase text, we employed BBN's part-of-speech tagger and looked for proper noun tags.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Analyzer", "sec_num": null }, { "text": "The lexical scanner and the constraints of the lexico-semantic models acted in concert to classify names. If there was a clear lexical clue (a corporate designator at the end of a name, a title (\"Mr.\", \"President\", ...) or middle initial in a personal name), the type was assigned by the lexical scanner. 
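The lexical-clue step can be sketched in Python; the designator and title lists here are illustrative stand-ins, not the actual MUC-5 dictionaries:

```python
import re

CORP_DESIGNATORS = {"CO", "CORP", "INC", "LTD"}   # hypothetical subset
TITLES = {"MR", "MRS", "MS", "DR", "PRESIDENT"}   # hypothetical subset

def classify_name(tokens):
    """Classify a proper name by a clear lexical clue, as the scanner does;
    return None when the type must wait for semantic context (sketch)."""
    last = tokens[-1].rstrip(".").upper()
    first = tokens[0].rstrip(".").upper()
    if last in CORP_DESIGNATORS:          # corporate designator at the end
        return "company"
    if first in TITLES:                   # title at the front
        return "person"
    # a middle initial, as in "John Q. Smith", also signals a person
    if any(re.fullmatch(r"[A-Z]\.", t) for t in tokens[1:-1]):
        return "person"
    return None                           # leave typing to the semantic models
```

A name like "TAIWAN" falls through to None here; as the text explains, its type is then assigned as a side effect of matching a lexico-semantic model.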
If the type of a name could not be determined by the scanner, but the name occurred in a context where only one type was allowed (e.g., as the object of \"own\"), the type would be assigned as a side effect of applying the lexico-semantic model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Analyzer", "sec_num": null }, { "text": "We have spent considerable time over the last two years building tools to acquire semantic patterns and semantic word similarities from corpora [5, 6], and we had hoped that these would be of significant benefit in our MUC-5 efforts, particularly in broadening our system's coverage. However, we did not have much opportunity to use these tools, since so much of our time was consumed in building an initial system at some minimal performance level.", "cite_spans": [ { "start": 145, "end": 148, "text": "[5,", "ref_id": "BIBREF5" }, { "start": 149, "end": 151, "text": "6]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Pattern and Similarity Acquisition", "sec_num": null }, { "text": "The lexico-semantic models as used previously specified a single level in the regularized parse tree structure: either a clause with its arguments and modifiers, or an NP with its modifiers. We have found it increasingly valuable, however, to be able to specify larger patterns which involve several parse tree levels, such as \"X signed an agreement with Y to do Z\", or \"X formed a joint venture with Y to do Z\". We have therefore extended our system in order to allow for such larger patterns, and permit the developer to specify the predicate structure into which this larger pattern should be mapped.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nested Semantic Models", "sec_num": null }, { "text": "Once we began to allow these larger patterns, we found that the task of writing such patterns correctly became quite challenging. 
Our long-term goal is to enable a user to add such patterns, but we seemed (with the added complexity) to be moving further from this goal. We therefore implemented a \"model builder\" interface which allows the developer to enter a prototype sentence and the corresponding predicate structure which should be produced. The interface then creates the required lexico-semantic patterns and mapping rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Builder", "sec_num": null }, { "text": "For example, to handle constructs of the form \"company signed an agreement with company to ...\", the developer would enter the sentence company1 signed (an agreement with company2 to act3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Builder", "sec_num": null }, { "text": "(where the parentheses, which are optional, indicate the NP bracketing) and would give the corresponding predicate (c-agree :agent company1 :co-agent company2 :event act3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Builder", "sec_num": null }, { "text": "The system would then create models and mapping rules appropriate to a sentence such as \"IBM signed an agreement with Apple to form a joint venture.\" Since these rules apply to the syntactically analyzed sentence, they would also handle syntactic variants such as \"The agreement to create the new venture was signed last week by IBM and Ford.
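One piece of the model builder, aligning the numbered slot names of the prototype sentence with the keyword roles of the target predicate, can be sketched in Python (the data format and function name here are hypothetical, not Proteus's internal representation):

```python
import re

def align_slots(prototype, predicate):
    """Map each numbered slot in a prototype sentence (e.g. 'company1',
    'act3') to the predicate role that names it (sketch)."""
    slots = set(re.findall(r"[a-z]+\d", prototype))
    mapping = {}
    for role, value in predicate.items():
        if value in slots:
            mapping[value] = role
    return mapping

rule = align_slots(
    "company1 signed (an agreement with company2 to act3)",
    {":agent": "company1", ":co-agent": "company2", ":event": "act3"},
)
```

From such an alignment, the builder can emit a lexico-semantic pattern whose operands carry the right semantic cases for any sentence matching the prototype's structure.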
\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Builder", "sec_num": null } ], "back_matter": [ { "text": "The development of the entire PROTEUS system has been sponsored primarily by the Advanced Research Projects Agency as part of the Strategic Computing Program, under Contract N00014-85-K-0163 and Grant N00014-90-J-1851 from the Office of Naval Research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SPONSORSHIP", "sec_num": null } ], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "Preference Semantics for Message Understanding", "authors": [ { "first": "R", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "J", "middle": [], "last": "Sterling", "suffix": "" } ], "year": 1989, "venue": "Proc. DARPA Speech and Natural Language Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grishman, R., and Sterling, J. Preference Semantics for Message Understanding. Proc. DARPA Speech and Natural Language Workshop, Morgan Kaufmann, 1990 (proceedings of the conference at Harwich Port, MA, Oct. 15-18, 1989).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Information Extraction and Semantic Constraints", "authors": [ { "first": "R", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "J", "middle": [], "last": "Sterling", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grishman, R., and Sterling, J. Information Extraction and Semantic Constraints.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Constraints . Proc. 13th Int'l Conf Computational Linguistics (COLING 90)", "authors": [], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Constraints. Proc. 
13th Int'l Conf. Computational Linguistics (COLING 90), Helsinki, August 20-25, 1990.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Natural Language Information Processing", "authors": [ { "first": "N", "middle": [], "last": "Sager", "suffix": "" } ], "year": 1981, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sager, N. Natural Language Information Processing, Addison-Wesley, 1981.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Acquisition of Selectional Patterns", "authors": [ { "first": "R", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "J", "middle": [], "last": "Sterling", "suffix": "" } ], "year": null, "venue": "Proc. 14th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grishman, R., and Sterling, J. Acquisition of Selectional Patterns. Proc. 14th", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Int'l Conf. Computational Linguistics (COLING 92)", "authors": [], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Int'l Conf. Computational Linguistics (COLING 92), Nantes, France, July 23-28, 1992.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Smoothing of Automatically Generated Selectional Constraints", "authors": [ { "first": "R", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "J", "middle": [], "last": "Sterling", "suffix": "" } ], "year": 1993, "venue": "Proc. ARPA Human Language Technology Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grishman, R., and Sterling, J. Smoothing of Automatically Generated Selectional Constraints. Proc.
ARPA Human Language Technology Workshop, March 21-24, 1993.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Proteus System Structure.", "uris": null, "type_str": "figure", "num": null }, "FIGREF1": { "text": ") (ENDMARK (\".\" \".\")) ) and the corresponding regularized structure is (S SAY (VTENSE PAST) (SUBJECT (NP A-COMPANY SINGULAR (NAMES (\"BRIDGESTONE\" \"SPORTS\" \"CO\")) (SN NP154)) ) (OBJECT (S SET-UP (SUBJECT (NP IT SINGULAR (SN NP156)) ) (OBJECT (NP JOINT-VENTURE SINGULAR (SN NP258) (T-POS A) (IN (NP A-COUNTRY SINGULAR (NAMES (\"Taiwan\")) (SN NP163)) ) (WITH (AND (NP CONCERN SINGULAR (SN NP166) (T-POS A) (A-POS LOCAL) ) (NP TRADING-HOUSE SINGULAR (SN NP171) (T-POS A) (A-POS JAPANESE))) ) (RN-TOVO (S PRODUCE (SUBJECT ANYONE) (OBJECT (NP CLUB PLURAL (SN NP180) (N-POS (NP GOLF SINGULAR (SN NP170)))) ) (IN-ORDER-TO (S SHIP (SUBJECT ANYONE) (OBJECT PRO) (TO (NP A-COUNTRY SINGULAR (NAMES (\"Japan\")) (SN NP177)))))))) ) (ASPECT PERF) (VTENSE PRESENT)) ) (TIMEPREP (NP FRIDAY SINGULAR (SN NP157))) )", "uris": null, "type_str": "figure", "num": null }, "FIGREF2": { "text": "(add-clause-model :id 'clause-form :parent 'clause-any :constraint 'W-form-venture :class 'C-form :adjuncts (list (make-specifier :marker 'subject :class 'C-muc5-entity :case :agent) (make-specifier :marker 'with :class 'C-muc5-entity :case :company-list-2) (make-specifier :marker 'object :class 'C-joint-venture :essential-required 'required :relaxable nil :case :joint-venture)))", "uris": null, "type_str": "figure", "num": null }, "FIGREF3": { "text": "L :MODEL-ID CLAUSE-PRODUCE ) :AGENT (ENTITY :IDENTIFIER N009 :CLASS C-MUC5-ENTITY :SET T :MEMBERS ((ENTITY :DETERMINER A :ID NP-JOINT-VENTURE-1 :IDENTIFIER N005 :CLASS C-JOINT-VENTURE ) :AGENT (ENTITY :SN NP156 :MODEL-ID NP-ANY :IDENTIFIER N004 :CLASS C-MUC5-ENTITY ) :MODEL-ID CLAUSE-FORM ) :AGENT (ENTITY :SN NP154 :NAMES (\"BRIDGESTONE\" \"SPORTS\" \"CO\") :MODEL-ID NP-COMPANY :IDENTIFIER N0000000002 :CLASS C-COMPANY ) :TENSE PAST :MODEL-ID CLAUSE-SAY)", "uris": null, "type_str": "figure", "num": null }, "FIGREF4": { "text": "class C-joint-venture :tied-up nil :agent ?company-list-", "uris": null, "type_str": "figure", "num": null }, "FIGREF5": { "text": "Recall (Dry Run, part 1)", "uris": null, "type_str": "figure", "num": null }, "TABREF2": { "html": null, "text": "The grammar is based on Harris's Linguistic String Theory and adapted from the larger Linguistic String Project (LSP) grammar developed by Naomi Sager at NYU", "content": "
\u2022 disfavoring (penalizing) headless noun phrases and headless relatives (this is important for parsing efficiency)
", "num": null, "type_str": "table" } } } }