{ "paper_id": "M91-1028", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:15:23.604641Z" }, "title": "NEW YORK UNIVERSIT Y DESCRIPTION OF THE PROTEUS SYSTEM AS USED FOR MUC-3", "authors": [ { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "", "affiliation": {}, "email": "" }, { "first": "John", "middle": [], "last": "Sterling", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Catherine", "middle": [], "last": "Macleo", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The PROTEUS system which we have used for MUC-3 has three main components : a syntactic analyzer, a semantic analyzer, and a template generator. The PROTEUS Syntactic Analyzer was developed starting in the fall of 1984 as a common base for all th e applications of the PROTEUS Project. Many aspects of its design reflect its heritage in the Linguistic Strin g Parser, previously developed and still in use at New York University. The current system, including the Restriction Language compiler, the lexical analyzer, and the parser proper, comprise approximately 4500 lines of Common Lisp. The Semantic Analyzer was initially developed in 1987 for the MUCK-I (RAINFORMs) application , extended for the MUCK-II (OPREPs) application, and further revised for the current evaluation. It currently consists of about 3000 lines of Common Lisp (excluding the domain-specific information). The Template Generator was written from scratch for the MUC-3 task; it is about 1200 lines of Commo n Lisp. .", "pdf_parse": { "paper_id": "M91-1028", "_pdf_hash": "", "abstract": [ { "text": "The PROTEUS system which we have used for MUC-3 has three main components : a syntactic analyzer, a semantic analyzer, and a template generator. The PROTEUS Syntactic Analyzer was developed starting in the fall of 1984 as a common base for all th e applications of the PROTEUS Project. Many aspects of its design reflect its heritage in the Linguistic Strin g Parser, previously developed and still in use at New York University. The current system, including the Restriction Language compiler, the lexical analyzer, and the parser proper, comprise approximately 4500 lines of Common Lisp. The Semantic Analyzer was initially developed in 1987 for the MUCK-I (RAINFORMs) application , extended for the MUCK-II (OPREPs) application, and further revised for the current evaluation. It currently consists of about 3000 lines of Common Lisp (excluding the domain-specific information). The Template Generator was written from scratch for the MUC-3 task; it is about 1200 lines of Commo n Lisp. .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The noun and verb macros automatically generate the regular inflectional forms .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The primary source of our dictionary information about open-class words (nouns, verbs, adjectives, an d adverbs) is the machine-readable version of the Oxford Advanced Learner's Dictionary (\"OALD\") . We have written programs which take the SGML (Standard Generalized Markup Language) version of the dictionary, extrac t information on inflections, parts of speech, and verb subcategorization (including information on adverbial particles and prepositions gleaned from the examples), and generate the LISP-ified form shown above . 
{ "text": "For MUC-3 we used several additional dictionaries. There was a dictionary (about 800 lines) for English words not defined in the OALD, or not adequately defined or too richly defined there. In addition, we extracted from the text and templates lists of organizations, locations, and proper names, and prepared small dictionaries for each (about 2000 lines total).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dictionary Files", "sec_num": null }, { "text": "The text reader splits the input text into tokens and then attempts to assign to each token (or sequence of tokens, in the case of an idiom) a definition (part of speech and syntactic attributes). The matching process proceeds in four steps: dictionary lookup, lexical pattern matching, spelling correction, and prefix stripping. Dictionary lookup immediately retrieves definitions assigned by any of the dictionaries (including inflected forms), while lexical pattern matching is used to identify a variety of specialized patterns, such as numbers, dates, times, and possessive forms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lookup", "sec_num": null }, { "text": "If neither dictionary lookup nor lexical pattern matching is successful, spelling correction and prefix stripping are attempted. Based on an analysis of the errors we found, we have used for MUC-3 a rather conservative spelling corrector, which identifies an input token as a misspelled form of a dictionary entry only if one of the two has a single instance of a letter while the other has a doubled instance of the letter (e.g., \"mispelled\" and \"misspelled\"). (The more standard corrector we used for MUCK-2, which allowed for any single insertion, deletion, transposition, or substitution, gave too many incorrect matches.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lookup", "sec_num": null },
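{ "text": "A minimal sketch of this doubled-letter test (our own illustration, not the PROTEUS code):

(defun doubled-letter-variant-p (shorter longer)
  ;; True if LONGER is SHORTER with one letter of a doubled pair repeated,
  ;; e.g. (doubled-letter-variant-p \"mispelled\" \"misspelled\") => T.
  (and (= (length longer) (1+ (length shorter)))
       (loop for i from 0 below (1- (length longer))
             thereis (and (char-equal (char longer i) (char longer (1+ i)))
                          (string-equal shorter
                                        (concatenate 'string
                                                     (subseq longer 0 i)
                                                     (subseq longer (1+ i))))))))

(defun conservative-spelling-match-p (token entry)
  ;; Accept TOKEN as a misspelling of ENTRY only under the doubled-letter rule,
  ;; in either direction.
  (or (doubled-letter-variant-p token entry)
      (doubled-letter-variant-p entry token)))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lookup", "sec_num": null },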
{ "text": "The prefix stripper attempts to identify the token as a combination of a prefix and a word defined in the dictionary. We currently use a list of 17 prefixes, including standard English ones like \"un\" and MUC-3 specials like \"narco-\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lookup", "sec_num": null }, { "text": "If all of these procedures fail, the word is tagged as a proper noun (name), since we found that most of our remaining undefined words were names.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lookup", "sec_num": null }, { "text": "In order to avoid full processing of sentences which would make no contribution to the templates, we perform a keyword-based filtering at the sentence level: if a sentence contains no key terms, it is skipped. This filtering is done after lexical analysis because the lexical analysis has identified the root form of all inflected words; these root forms provide links into the semantic hierarchy. The filtering can therefore be specified in terms of a small number of word classes, one of which must be present for the sentence to be worth processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering", "sec_num": null },
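{ "text": "A schematic sketch of such a filter (ours; the word classes and the root-to-class table are purely illustrative):

(defparameter *trigger-classes* '(attack bombing kidnapping murder))

(defun word-class (root)
  ;; Stand-in for the link from a root form into the semantic hierarchy.
  (getf '(bomb bombing attack attack kidnap kidnapping kill murder) root))

(defun sentence-worth-processing-p (root-forms)
  ;; A sentence is processed only if some root maps into a trigger class.
  (some (lambda (root) (member (word-class root) *trigger-classes*)) root-forms))

;; e.g. (sentence-worth-processing-p '(terrorist bomb embassy)) => a true value
;;      (sentence-worth-processing-p '(minister visit town))   => NIL", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering", "sec_num": null },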
{ "text": "Syntactic analysis involves two stages of processing: parsing and syntactic regularization. At the core of the system is an active chart parser. The grammar is an augmented context-free grammar, consisting of BNF rules plus procedural restrictions which check grammatical constraints not easily captured in the BNF rules. Most restrictions are stated in PROTEUS Restriction Language (a variant of the language developed for the Linguistic String Parser) and translated into LISP; a few are coded directly in LISP [1]. For example, the count noun restriction (that singular countable nouns have a determiner) is stated as", "cite_spans": [ { "start": 517, "end": 520, "text": "[1]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "WCOUNT = IN LNR AFTER NVAR: IF BOTH CORE Xcore IS NCOUNT AND Xcore IS SINGULAR THEN IN LN, TPOS IS NOT EMPTY.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "Associated with each BNF rule is a regularization rule, which computes the regularized form of each node in the parse tree from the regularized forms of its immediate constituents. These regularization rules are based on lambda-reduction, as in GPSG. The primary function of syntactic regularization is to reduce all clauses to a standard form consisting of aspect and tense markers, the operator (verb or adjective), and syntactically marked cases. For example, the definition of assertion, the basic S structure in our grammar, is ::= : (s !( )). Here the portion after the single colon defines the regularized structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "Coordinate conjunction is introduced by a metarule (as in GPSG), which is applied to the context-free components of the grammar prior to parsing. The regularization procedure expands any conjunction into a conjunction of clauses or of noun phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "The output of the parser for the first sentence of DEV-0099, \"POLICE HAVE REPORTED THAT TERRORISTS TONIGHT BOMBED THE EMBASSIES OF THE PRC AND THE SOVIET UNION.\", is the parse tree reproduced in the FIGREF1 figure entry, and the corresponding regularized structure is (S PERF REPORT (SUBJECT (NP POLICE PLURAL (SN NP1227))) (OBJECT (S PAST BOMB (SUBJECT (NP TERRORIST PLURAL (SN NP1238))) (OBJECT (NP EMBASSY PLURAL (SN NP1242) (T-POS THE) (OF (AND (NP PRC SINGULAR (SN NP1231) (T-POS THE)) (NP USSR SINGULAR (SN NP1235) (T-POS THE)))))) (PREP (NP TONIGHT SINGULAR (SN NP1237)))))).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "The system uses a chart parser operating top-down, left-to-right. As edges are completed (i.e., as nodes of the parse tree are built), restrictions associated with those productions are invoked to assign and test features of the parse tree nodes. If a restriction fails, that edge is not added to the chart. When certain levels of the tree are complete (those producing noun phrase and clause structures), the regularization rules are invoked to compute a regularized structure for the partial parse, and selection is invoked to verify the semantic well-formedness of the structure (as noted earlier, selection uses the same \"semantic analysis\" code subsequently employed to translate the tree into logical form).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "One unusual feature of the parser is its weighting capability. Restrictions may assign scores to nodes; the parser will perform a best-first search for the parse tree with the highest score. This scoring is used to implement various preference mechanisms:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "\u2022 closest attachment of modifiers (we penalize each modifier by the number of words separating it from its head)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "\u2022 preferred narrow conjoining for clauses (we penalize a conjoined clause structure by the number of words it subsumes)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "\u2022 preference semantics (selection does not reject a structure, but imposes a heavy penalty if the structure does not match any lexico-semantic model, and a lesser penalty if the structure matches a model but with some operands or modifiers left over) [2, 3] \u2022 relaxation of certain syntactic constraints, such as the count noun constraint, adverb position constraints, and comma constraints", "cite_spans": [ { "start": 253, "end": 256, "text": "[2,", "ref_id": "BIBREF1" }, { "start": 257, "end": 260, "text": "3 ]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "\u2022 disfavoring (penalizing) headless noun phrases and headless relatives (this is important for parsing efficiency)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null },
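{ "text": "The weighting and restriction checking just described can be pictured with the following schematic sketch (ours, not the PROTEUS implementation): a completed edge is kept only if its restrictions succeed, and surviving edges are queued best-first by score.

(defstruct edge
  category       ; non-terminal label, e.g. ASSERTION or LNR
  children       ; completed constituent edges
  features       ; feature plist assigned by restrictions, e.g. (:ncount t :singular t)
  (score 0))     ; accumulated preference score (penalties are negative)

(defvar *agenda* '())   ; edges waiting to be extended, best score first

(defun complete-edge (edge restrictions)
  ;; Run the restrictions for the production that built EDGE.  A restriction
  ;; may lower the score (a preference) or return NIL (a hard failure).
  (dolist (r restrictions)
    (unless (funcall r edge)
      (return-from complete-edge nil)))
  ;; Passing edges are queued best-first by score.
  (setf *agenda* (merge 'list (list edge) *agenda* #'> :key #'edge-score))
  edge)

;; An illustrative restriction in the spirit of WCOUNT, written as a relaxed
;; (penalizing rather than rejecting) constraint.
(defun relaxed-wcount (edge)
  (when (and (eq (edge-category edge) 'lnr)
             (getf (edge-features edge) :ncount)
             (getf (edge-features edge) :singular)
             (not (getf (edge-features edge) :tpos)))
    (decf (edge-score edge) 5))
  t)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null },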
"\u2022 disfavoring (penalizing) headless noun phrases and headless relatives (this is important for parsin g efficiency)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(S PERF REPORT (SUBJECT (NP POLICE PLURAL (SN NP1227)) ) (OBJEC T (S PAST BOMB (SUBJECT (NP TERRORIST PLURAL (SN NP1238)) ) (OBJEC T (NP EMBASSY PLURAL (SN NP1242) (T-POS THE ) (O F (AND (NP PRC SINGULAR (SN NP1231) (T-POS THE) ) (NP USSR SINGULAR (SN NP1235) (T-POS THE))))) ) (PREP (NP TONIGHT SINGULAR (SN NP1237))))) )", "sec_num": null }, { "text": "The grammar is based on Harris's Linguistic String Theory and adapted from the larger Linguistic Strin g Parser (LSP) grammar developed by Naomi Sager at NYU [4] . The grammar is gradually being enlarged to cove r more of the LSP grammar . The current grammar is 1200 lines of BNF and Restriction Language plus 300 lines of Lisp ; it includes 150 non-terminals, 365 productions, and 103 restrictions .", "cite_spans": [ { "start": 158, "end": 161, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "(S PERF REPORT (SUBJECT (NP POLICE PLURAL (SN NP1227)) ) (OBJEC T (S PAST BOMB (SUBJECT (NP TERRORIST PLURAL (SN NP1238)) ) (OBJEC T (NP EMBASSY PLURAL (SN NP1242) (T-POS THE ) (O F (AND (NP PRC SINGULAR (SN NP1231) (T-POS THE) ) (NP USSR SINGULAR (SN NP1235) (T-POS THE))))) ) (PREP (NP TONIGHT SINGULAR (SN NP1237))))) )", "sec_num": null }, { "text": "Over the course of MUC-2 and MUC-3 we have added several mechanisms for recovering from sentence s the grammar cannot fully parse ; these are described in our site report.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(S PERF REPORT (SUBJECT (NP POLICE PLURAL (SN NP1227)) ) (OBJEC T (S PAST BOMB (SUBJECT (NP TERRORIST PLURAL (SN NP1238)) ) (OBJEC T (NP EMBASSY PLURAL (SN NP1242) (T-POS THE ) (O F (AND (NP PRC SINGULAR (SN NP1231) (T-POS THE) ) (NP USSR SINGULAR (SN NP1235) (T-POS THE))))) ) (PREP (NP TONIGHT SINGULAR (SN NP1237))))) )", "sec_num": null }, { "text": "The output of syntactic analysis goes through semantic analysis and reference resolution and is then added t o the accumulating logical form for the message . Following both semantic analysis and reference resolution certai n transformations are performed to simplify the logical form . All of this processing makes use of a concept hierarch y which captures the class/subclass/instance relations in the domain .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SEMANTIC ANALYSIS AND REFERENCE RESOLUTIO N", "sec_num": null }, { "text": "Semantic analysis uses a set of lexico-semantic models to map the regularized syntactic analysis into a semantic representation. Each model specifies a class of verbs, adjectives, or nouns and a set of operands ; for eac h operand it indicates the possible syntactic case markers, the semantic class of the operand, whether or not the operand is required, and the semantic case to be assigned to the operand in the output representation . For example , the model for \" damages \" is (add-clause-model :id 'clause-damage-3 :parent 'clause-an y :constraint 'damag e :operands (list (make-specifie r :marker 'subjec t :class 'explosive-objec t :case :instrument ) (make-specifie r :marker 'objec t :class 'target-entit y :case :patien t :essential-required 'required)) ) The models are arranged in a shallow hierarchy with inheritance, so that arguments and modifiers which are share d by a class of verbs need only be stated once . 
{ "text": "The class explosive-object in the clause model refers to the concept in the concept hierarchy, whose entries have the form: (defconcept ... :alias (dynamite-charge)) (defconcept bomb :typeof explosive-object :muctype bomb) (defconcept |VEHICLE BOMB| :typeof explosive-object :muctype |VEHICLE BOMB|) (defconcept car-bomb :typeof |VEHICLE BOMB|) (defconcept bus-bomb :typeof |VEHICLE BOMB|) (defconcept dynamite :typeof explosive-object :alias tnt :muctype DYNAMITE) There are currently a total of 2098 concepts in the hierarchy, of which 1439 are place names. The output of semantic analysis is a nested set of entity and event structures, with arguments labeled by keywords primarily designating semantic roles. For the first sentence of DEV-0099, the output is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SEMANTIC ANALYSIS AND REFERENCE RESOLUTION", "sec_num": null } ], "back_matter": [ { "text": "Reference resolution is applied to the output of semantic analysis in order to replace anaphoric noun phrases (representing either events or entities) by appropriate antecedents. Each potential anaphor is compared to prior entities or events, looking for a suitable antecedent such that the class of the anaphor (in the concept hierarchy) is equal to or more general than that of the antecedent, the anaphor and antecedent match in number, the restrictive modifiers in the anaphor have corresponding arguments in the antecedent, and the non-restrictive modifiers (e.g., apposition) of the anaphor are not inconsistent with those of the antecedent. Special tests are provided for names (people may be referred to by a subset of their names) and for referring to groups by typical members (\"terrorist force\" ... \"terrorists\"). Some further discussion of reference resolution and the subsequent process of template merging is included in a separate paper on discourse analysis in this volume (\"Computational Aspects of Discourse in the Context of MUC-3\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference Resolution", "sec_num": null },
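{ "text": "A minimal sketch of the class and number tests (ours; the :typeof links mirror the defconcept entries above, and all other names are illustrative):

(defvar *typeof* (make-hash-table))   ; concept -> parent concept, from the defconcept entries

(setf (gethash 'car-bomb *typeof*) '|VEHICLE BOMB|
      (gethash '|VEHICLE BOMB| *typeof*) 'explosive-object
      (gethash 'bomb *typeof*) 'explosive-object)

(defun concept<= (general specific)
  ;; True if GENERAL is the same as, or an ancestor of, SPECIFIC in the hierarchy.
  (loop for c = specific then (gethash c *typeof*)
        while c
        thereis (eq c general)))

(defun compatible-antecedent-p (anaphor antecedent)
  ;; The anaphor's class must be equal to or more general than the antecedent's,
  ;; and the two must agree in number.  (Modifier tests omitted in this sketch.)
  (and (concept<= (getf anaphor :class) (getf antecedent :class))
       (eq (getf anaphor :number) (getf antecedent :number))))

;; e.g. (compatible-antecedent-p '(:class explosive-object :number singular)
;;                               '(:class car-bomb :number singular))  => T", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference Resolution", "sec_num": null },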
{ "text": "The transformations which are applied after semantic analysis and after reference resolution simplify and regularize the logical form in various ways. For example, if a verb governs an argument of a nominalization, the argument is inserted into the event created from the nominalization: \"x conducts the attack\", \"x claims responsibility for the attack\", \"x was accused of the attack\" etc. are all mapped to \"x attacks\" (with appropriate settings of the confidence slot). For example, the rule to take \"X was accused of Y\" and make X the agent of Y is (((event :predicate accusation-event :agent ?agent-1 :event (event :identifier ?id-1 . ?R2) . ?R1) (event :identifier ?id-1 . ?R4)) -> ((modify 2 '(:agent ?agent-1 :confidence |SUSPECTED OR ACCUSED|)) (delete 1))) Transformations are also used to expand conjoined structures. For example, the rule to take \"X of (Y and Z)\" and expand it to \"(X of Y) and (X of Z)\" is (((entity :nationality (entity :members (?P1 ?P2) . ?R1) . ?R2)) -> ((modify 1 '(:nationality nil :set t :members ((entity :nationality ?P1 :set nil . ?R2) (entity :nationality ?P2 :set nil . ?R2)))))) This rule is used in message TST1-0099, for example, to expand THE EMBASSIES OF THE PRC AND THE SOVIET UNION into THE EMBASSY OF THE PRC AND THE EMBASSY OF THE SOVIET UNION. There are currently 32 such rules. These transformations are written as productions and applied using a simple data-driven production system interpreter which is part of the PROTEUS system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logical Form Transformations", "sec_num": null }, { "text": "Once all the sentences in an article have been processed through syntactic and semantic analysis, the resulting logical forms are sent to the template generator. The template generator operates in four stages. First, a frame structure resembling a simplified template (with incident-type, perpetrator, physical-target, human-target, date, location, instrument, physical-effect, and human-effect slots) is generated for each event. Date and location expressions are reduced to a normalized form at this point. In particular, date expressions such as \"tonight\", \"last month\", \"last April\", \"a year ago\", etc. are replaced by explicit dates or date ranges, based on the dateline of the article. Second, a series of heuristics attempt to merge these frames. This merging is blocked if the dates or locations are different, the incident types are incompatible, or the perpetrators are incompatible. Third, a series of filters removes frames involving only military targets and those involving events more than two months old. Finally, MUC templates are generated from these frames.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TEMPLATE GENERATOR", "sec_num": null },
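{ "text": "As a rough illustration of this date normalization (ours, not the PROTEUS code; only a few expressions are handled), a relative expression and the article's dateline are mapped to an explicit date or date range:

(defun days-in-month (month year)
  ;; Number of days in MONTH of YEAR (Gregorian).
  (let ((lengths #(31 28 31 30 31 30 31 31 30 31 30 31)))
    (if (and (= month 2)
             (zerop (mod year 4))
             (or (plusp (mod year 100)) (zerop (mod year 400))))
        29
        (aref lengths (1- month)))))

(defun normalize-date (expression dateline)
  ;; DATELINE is a list (YEAR MONTH DAY) taken from the article header.
  ;; Returns a single (YEAR MONTH DAY) or a range ((Y M D) (Y M D)).
  (destructuring-bind (year month day) dateline
    (cond ((string-equal expression \"TONIGHT\") (list year month day))
          ((string-equal expression \"A YEAR AGO\") (list (1- year) month day))
          ((string-equal expression \"LAST MONTH\")
           (let* ((m (if (= month 1) 12 (1- month)))
                  (y (if (= month 1) (1- year) year)))
             (list (list y m 1) (list y m (days-in-month m y)))))
          (t nil))))   ; other expressions omitted in this sketch

;; e.g. (normalize-date \"LAST MONTH\" '(1989 1 3)) => ((1988 12 1) (1988 12 31))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TEMPLATE GENERATOR", "sec_num": null },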
{ "text": "The development of the entire PROTEUS system has been sponsored primarily by the Defense Advanced Research Projects Agency as part of the Strategic Computing Program, under Contract N00014-85-K-0163 and Grant N00014-90-J-1851 from the Office of Naval Research. Additional support has been received from the National Science Foundation under grant DCR-85-01843 for work on enhancing system robustness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SPONSORSHIP", "sec_num": null } ], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "Preference Semantics for Message Understanding", "authors": [ { "first": "R", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "J", "middle": [], "last": "Sterling", "suffix": "" } ], "year": 1989, "venue": "Proc. DARPA Speech and Natural Language Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grishman, R., and Sterling, J. Preference Semantics for Message Understanding. Proc. DARPA Speech and Natural Language Workshop, Morgan Kaufmann, 1990 (proceedings of the conference at Harwich Port, MA, Oct. 15-18, 1989).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Information Extraction and Semantic Constraints", "authors": [ { "first": "R", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "J", "middle": [], "last": "Sterling", "suffix": "" } ], "year": 1990, "venue": "Proc. 13th Int'l Conf. Computational Linguistics (COLING 90)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grishman, R., and Sterling, J. Information Extraction and Semantic Constraints. Proc. 13th Int'l Conf. Computational Linguistics (COLING 90), Helsinki, August 20-25, 1990.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Natural Language Information Processing", "authors": [ { "first": "N", "middle": [], "last": "Sager", "suffix": "" } ], "year": 1981, "venue": "Natural Language Information Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sager, N. Natural Language Information Processing, Addison-Wesley, 1981.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "Structure of the Proteus System as used for MUC-3. (VERB :ROOT \"ABSCOND\" :OBJLIST (NULLOBJ PN (PVAL (FROM WITH))))" }, "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "SUBJECT (NSTG (LNR (NVAR (N \"POLICE\")))) ) (VERB (LTVR (TV \"HAVE\")) ) (OBJECT (VENO (LVENR (VEN \"REPORTED\") ) (OBJECT (THATS (\"THAT\" \"THAT\" ) (ASSERTION (SUBJECT (NSTG (LNR (NVAR (N \"TERRORISTS\")))) ) (SA (SA-VAL (NSTGT (NSTG (LNR (NVAR (N \"TONIGHT\")))))) ) (VERB (LTVR (TV \"BOMBED\")) ) (OBJECT (NSTGO (NSTG (LNR (LN (TPOS (LTR (T \"THE\")))) (NVAR (N \"EMBASSIES\") ) (RN (RN-VAL (PN (P \"OF\" ) (NSTGO (NSTG (LNR (LNR (LN (TPOS (LTR (T \"THE\"))) ) (NVAR (NAMESTG (LNAMER (N \"PRC\")))) ) (CONJ-WORD (\"AND\" \"AND\") ) (LNR (LN (TPOS (LTR (T \"THE\"))) )" }, "FIGREF2": { "num": null, "type_str": "figure", "uris": null, "text": "following an attack frame (e.g., \"The FMLN attacked the town. Seven civilians died.\")" } } } }