{ "paper_id": "M92-1032", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:13:13.666748Z" }, "title": "NEW YORK UNIVERSITY DESCRIPTION OF THE PROTEUS SYSTEM AS USED FOR MUC-4", "authors": [ { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "", "affiliation": {}, "email": "grishman@cs.nyu.edu" }, { "first": "Catherine", "middle": [], "last": "Macleod", "suffix": "", "affiliation": {}, "email": "macleod@cs.nyu.edu" }, { "first": "John", "middle": [], "last": "Sterling", "suffix": "", "affiliation": {}, "email": "sterling@cs.nyu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The PROTEUS system which we have used for MUC-4 is largely unchanged from that used for MUC-3. It has three main components : a syntactic analyzer, a semantic analyzer, and a template generator. The PROTEUS Syntactic Analyzer was developed starting in the fall of 1984 as a common base for all th e applications of the PROTEUS Project. Many aspects of its design reflect its heritage in the Linguistic Strin g Parser, previously developed and still in use at New York University. The current system, including the Restriction Language compiler, the lexical analyzer, and the parser proper, comprise approximately 4500 lines of Common Lisp. The Semantic Analyzer was initially developed in 1987 for the MUCK-I (RAINFORMs) application , extended for the MUCK-II (OPREPs) application, and has been incrementally revised since. It currently consist s of about 3000 lines of Common Lisp (excluding the domain-specific information). The Template Generator was written from scratch for the MUC-3 task and then revised for the MUC-4 templates; it is about 1200 lines of Common Lisp..", "pdf_parse": { "paper_id": "M92-1032", "_pdf_hash": "", "abstract": [ { "text": "The PROTEUS system which we have used for MUC-4 is largely unchanged from that used for MUC-3. It has three main components : a syntactic analyzer, a semantic analyzer, and a template generator. The PROTEUS Syntactic Analyzer was developed starting in the fall of 1984 as a common base for all th e applications of the PROTEUS Project. Many aspects of its design reflect its heritage in the Linguistic Strin g Parser, previously developed and still in use at New York University. The current system, including the Restriction Language compiler, the lexical analyzer, and the parser proper, comprise approximately 4500 lines of Common Lisp. The Semantic Analyzer was initially developed in 1987 for the MUCK-I (RAINFORMs) application , extended for the MUCK-II (OPREPs) application, and has been incrementally revised since. It currently consist s of about 3000 lines of Common Lisp (excluding the domain-specific information). The Template Generator was written from scratch for the MUC-3 task and then revised for the MUC-4 templates; it is about 1200 lines of Common Lisp..", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The text goes through the five major stages of processing : lexical analysis, syntactic analysis, semantic analysis, reference resolution, and template generation (see Figure 1 ). In addition, some restructuring of the logical form is performed both after semantic analysis and after reference resolution (only the restructuring after referenc e resolution is shown in Figure 1 ) . 
Processing is basically sequential: each sentence goes through lexical, syntactic, and semantic analysis and reference resolution; the logical form for the entire message is then fed to template generation. However, semantic (selectional) checking is performed during syntactic analysis, employing essentially the same code later used for semantic analysis.", "cite_spans": [], "ref_spans": [ { "start": 168, "end": 176, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 369, "end": 377, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "STAGES OF PROCESSING", "sec_num": null }, { "text": "Each of these stages is described in a section which follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STAGES OF PROCESSING", "sec_num": null }, { "text": "Our dictionaries contain only syntactic information: the parts of speech for each word, information about the complement structure of verbs, distributional information (e.g., for adjectives and adverbs), etc. We follow closely the set of syntactic features established for the NYU Linguistic String Parser. This information is entered in LISP form using noun, verb, adjective, and adverb macros for the open-class words, and a word macro for other parts of speech: (ADVERB \"ABRUPTLY\" :ATTRIBUTES (DSA)) (ADJECTIVE \"ABRUPT\") (NOUN :ROOT \"ABSCESS\" :ATTRIBUTES (NCOUNT)) (VERB :ROOT \"ABSCOND\" :OBJLIST (NULLOBJ PN (PVAL (FROM WITH)))) The noun and verb macros automatically generate the regular inflectional forms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dictionary Format", "sec_num": null }, { "text": "The primary source of our dictionary information about open-class words (nouns, verbs, adjectives, and adverbs) is the machine-readable version of the Oxford Advanced Learner's Dictionary (\"OALD\"). We have written programs which take the SGML (Standard Generalized Markup Language) version of the dictionary, extract information on inflections, parts of speech, and verb subcategorization (including information on adverbial particles and prepositions gleaned from the examples), and generate the LISP-ified form shown above. This is supplemented by a manually-coded dictionary (about 1500 lines, 900 entries) for closed-class words, words not adequately defined in the OALD, and a few very common words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dictionary Files", "sec_num": null }, { "text": "For MUC-4 we used several additional dictionaries. There was a dictionary (about 900 lines) for domain-specific English words not defined in the OALD, or too richly defined there. In addition, we extracted from the text and templates lists of organizations, locations, and proper names, and prepared small dictionaries for each (about 2500 lines total).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dictionary Files", "sec_num": null }, { "text": "The text reader splits the input text into tokens and then attempts to assign to each token (or sequence of tokens, in the case of an idiom) a definition (part of speech and syntactic attributes). The matching process proceeds in four steps: dictionary lookup, lexical pattern matching, spelling correction, and prefix stripping.
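One way such a cascade might be organized is sketched below (the helper functions named here are hypothetical, not the actual PROTEUS routines):
(defun lookup-token (token)
  ;; Try each matching step in order; the first step that returns a
  ;; definition wins.
  (or (dictionary-lookup token)          ; includes inflected forms
      (lexical-pattern-match token)      ; numbers, dates, times, possessives
      (attempt-spelling-correction token)
      (strip-prefix token)
      ;; If all steps fail, tag the token as a proper noun (name).
      '(noun :attributes (name))))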
Dictionary lookup immediately retrieves definitions assigned by any of the dictionaries (including inflected forms), while lexical pattern matching is used to identify a variety of specialized patterns, such as numbers, dates, times, and possessive forms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lookup", "sec_num": null }, { "text": "If neither dictionary lookup nor lexical pattern matching is successful, spelling correction and prefix stripping are attempted. For words of any length, we identify an input token as a misspelled form of a dictionary entry if one of the two has a single instance of a letter while the other has a doubled instance of the letter (e.g., \"mispelled\" and \"misspelled\"). For words of 8 or more letters, we use a more general spelling corrector which allows for any single insertion, deletion, or substitution.\u00b9 The prefix stripper attempts to identify the token as a combination of a prefix and a word defined in the dictionary. We currently use a list of 17 prefixes, including standard English ones like \"un\" and MUC-3/MUC-4 specials like \"narco-\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lookup", "sec_num": null }, { "text": "If all of these procedures fail, the word is tagged as a proper noun (name), since we found that most of our remaining undefined words were names.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lookup", "sec_num": null }, { "text": "For MUC-4, we have incorporated the stochastic part-of-speech tagger from BBN in order to assign probabilities to each part of speech assigned by the lexical analyzer. The log probabilities are used as scores, and combined with other scores to determine the overall score of each parsing hypothesis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lookup", "sec_num": null }, { "text": "In order to avoid full processing of sentences which would make no contribution to the templates, we perform keyword-based filtering at the sentence level: if a sentence contains no key terms, it is skipped. This filtering is done after lexical analysis because the lexical analysis has identified the root form of all inflected words; these root forms provide links into the semantic hierarchy. The filtering can therefore be specified in terms of a small number of word classes, one of which must be present for the sentence to be worth processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtering", "sec_num": null }, { "text": "Syntactic analysis involves two stages of processing: parsing and syntactic regularization. At the core of the system is an active chart parser. The grammar is an augmented context-free grammar, consisting of BNF rules plus procedural restrictions which check grammatical constraints not easily captured in the BNF rules. Most restrictions are stated in PROTEUS Restriction Language (a variant of the language developed for the Linguistic String Parser) and translated into LISP; a few are coded directly in LISP [1]. The grammar is based on Harris's Linguistic String Theory and adapted from the larger Linguistic String Project (LSP) grammar developed by Naomi Sager at NYU [4]. The grammar is gradually being enlarged to cover more of the LSP grammar. The current grammar is 1600 lines of BNF and Restriction Language plus 300 lines of Lisp; it includes 186 non-terminals, 464 productions, and 132 restrictions.
For example, the count noun restriction (that singular countable nouns have a determiner) is stated as WCOUNT = IN LNR AFTER NVAR :", "cite_spans": [ { "start": 518, "end": 521, "text": "[1]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "IF BOTH CORE Xcore IS NCOUNT AND Xcore IS SINGULAR THEN IN LN, TPOS IS NOT EMPTY.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "Associated with each BNF rule is a regularization rule, which computes the regularized form of each node in the parse tree from the regularized forms of its immediate constituents. These regularization rules are based on lambda-reduction, as in GPSG. The primary function of syntactic regularization is to reduce all clauses to a standard form consisting of aspect and tense markers, the operator (verb or adjective), and syntactically marked cases. For example, in the definition of assertion, the basic S structure in our grammar, the portion after the single colon defines the regularized structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "\u00b9 The minimum word length requirement is needed to avoid false hits where proper names are incorrectly identified as misspellings of words defined in the dictionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "Coordinate conjunction is introduced by a metarule (as in GPSG), which is applied to the context-free components of the grammar prior to parsing. The regularization procedure expands any conjunction into a conjunction of clauses or of noun phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "The output of the parser for the first sentence of TST2-0048, \"SALVADORAN PRESIDENT-ELECT ALFREDO CRISTIANI CONDEMNED THE TERRORIST KILLING OF ATTORNEY GENERAL ROBERTO GARCIA ALVARADO AND ACCUSED THE FARABUNDO MARTI NATIONAL LIBERATION FRONT (FMLN) OF THE CRIME.\", is a single parse tree covering both conjuncts; the corresponding regularized structure is a conjunction (AND) of a CONDEMN clause and an ACCUSE clause.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "The system uses a chart parser operating top-down, left-to-right. As edges are completed (i.e., as nodes of the parse tree are built), restrictions associated with those productions are invoked to assign and test features of the parse tree nodes. If a restriction fails, that edge is not added to the chart. When certain levels of the tree are complete (those producing noun phrase and clause structures), the regularization rules are invoked to compute a regularized structure for the partial parse, and selection is invoked to verify the semantic well-formedness of the structure (as noted earlier, selection uses the same \"semantic analysis\" code subsequently employed to translate the tree into logical form).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "One unusual feature of the parser is its weighting capability.
Restrictions may assign scores to nodes; the parser will perform a best-first search for the parse tree with the highest score. This scoring is used to implement various preference mechanisms:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "\u2022 closest attachment of modifiers (we penalize each modifier by the number of words separating it from its head)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "\u2022 preferred narrow conjoining for clauses (we penalize a conjoined clause structure by the number of words it subsumes)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "\u2022 preference semantics (selection does not reject a structure, but imposes a heavy penalty if the structure does not match any lexico-semantic model, and a lesser penalty if the structure matches a model but with some operands or modifiers left over) [2,3]", "cite_spans": [ { "start": 253, "end": 258, "text": "[2,3]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "Over the course of the MUCs we have added several mechanisms for recovering from sentences the grammar cannot fully parse:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "\u2022 relaxation of certain syntactic constraints, such as the count noun constraint, adverb position constraints, and comma constraints", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "\u2022 disfavoring (penalizing) headless noun phrases and headless relatives (this is important for parsing efficiency)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "\u2022 allowing the grammar to skip a single word, or a series of words enclosed in parentheses or dashes, with a large score penalty", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "\u2022 if no parse is obtained for the entire sentence, taking the analysis which, starting at the first word, subsumes the most words", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "\u2022 optionally, taking the remainder of the sentence and \"covering\" it with noun phrases and clauses, preferring the longest noun phrases or clauses which can be identified", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SYNTACTIC ANALYSIS", "sec_num": null }, { "text": "The output of syntactic analysis goes through semantic analysis and reference resolution and is then added to the accumulating logical form for the message. Following both semantic analysis and reference resolution certain transformations are performed to simplify the logical form. All of this processing makes use of a concept hierarchy which captures the class/subclass/instance relations in the domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SEMANTIC ANALYSIS AND REFERENCE RESOLUTION", "sec_num": null }, { "text": "Semantic analysis uses a set of lexico-semantic models to map the regularized syntactic analysis into a semantic representation. Each model specifies a class of verbs, adjectives, or nouns and a set of operands; for each operand it indicates the possible syntactic case markers, the semantic class of the operand, whether or not the operand is required, and the semantic case to be assigned to the operand in the output representation.
For example, the model for \"damages\" is (add-clause-model :id 'clause-damage-3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SEMANTIC ANALYSIS AND REFERENCE RESOLUTION", "sec_num": null }, { "text": ":parent 'clause-any :constraint 'damage :operands (list (make-specifier :marker 'subject :class 'explosive-object :case :instrument) (make-specifier :marker 'object :class 'target-entity :case :patient :essential-required 'required))) The models are arranged in a shallow hierarchy with inheritance, so that arguments and modifiers which are shared by a class of verbs need only be stated once. The model above inherits only from the most general clause model, clause-any, which includes general clausal modifiers such as negation, time, tense, modality, etc. The evaluated MUC-4 system had 124 clause models, 21 nominalization models, and 39 other noun phrase models, a total of about 2500 lines. The class explosive-object in the clause model refers to the concept in the concept hierarchy, whose entries have the form:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SEMANTIC ANALYSIS AND REFERENCE RESOLUTION", "sec_num": null }, { "text": "(defconcept ...) There are currently a total of 2474 concepts in the hierarchy, of which 1734 are place names. The output of semantic analysis is a nested set of entity and event structures, with arguments labeled by keywords primarily designating semantic roles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SEMANTIC ANALYSIS AND REFERENCE RESOLUTION", "sec_num": null }, { "text": "Reference resolution is applied to the output of semantic analysis in order to replace anaphoric noun phrases (representing either events or entities) by appropriate antecedents. Each potential anaphor is compared to prior entities or events, looking for a suitable antecedent such that the class of the anaphor (in the concept hierarchy) is equal to or more general than that of the antecedent, the anaphor and antecedent match in number, the restrictive modifiers in the anaphor have corresponding arguments in the antecedent, and the non-restrictive modifiers (e.g., apposition) of the anaphor are not inconsistent with those of the antecedent. Special tests are provided for names (people may be referred to by a subset of their names) and for referring to groups by typical members (\"terrorist force\" ... \"terrorists\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference Resolution", "sec_num": null }, { "text": "The transformations which are applied after semantic analysis and after reference resolution simplify and regularize the logical form in various ways. For example, if a verb governs an argument of a nominalization, the argument is inserted into the event created from the nominalization: \"x conducts the attack\", \"x claims responsibility for the attack\", \"x was accused of the attack\", etc. are all mapped to \"x attacks\" (with appropriate settings of the confidence slot). For example, the rule to take \"X was accused of Y\" and make X the agent of Y is (((event :predicate accusation-event :agent ?agent-1 :event (event :identifier ?id-1 . ?R2) . ?R1) (event :identifier ?id-1 . ?R4)) -> ((modify 2 '(:agent ?agent-1 :confidence '|SUSPECTED OR ACCUSED|)) (delete 1))) Transformations are also used to expand conjoined structures.
For example, there is a rule to expand \"the towns of x and y\" into \"the town of x and the town of y\", and there is a rule to expand \"event at location-1 and location-2\" into \"event at location-1 and event at location-2\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logical Form Transformations", "sec_num": null }, { "text": "There are currently 32 such rules. These transformations are written as productions and applied using a simple data-driven production system interpreter which is part of the PROTEUS system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Logical Form Transformations", "sec_num": null }, { "text": "Once all the sentences in an article have been processed through syntactic and semantic analysis, the resulting logical forms are sent to the template generator. The template generator operates in four stages. First, a frame structure resembling a simplified template (with incident-type, perpetrator, physical-target, human-target, date, location, instrument, physical-effect, and human-effect slots) is generated for each event. Date and location expressions are reduced to a normalized form at this point. In particular, date expressions such as \"tonight\", \"last month\", \"last April\", \"a year ago\", etc. are replaced by explicit dates or date ranges, based on the dateline of the article. Second, a series of heuristics attempts to merge these frames, merging", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TEMPLATE GENERATOR", "sec_num": null }, { "text": "\u2022 frames referring to a common target \u2022 frames arising from the same sentence \u2022 an effect frame following an attack frame (e.g., \"The FMLN attacked the town. Seven civilians died.\") This merging is blocked if the dates or locations are different, the incident types are incompatible, or the perpetrators are incompatible. Third, a series of filters removes frames involving only military targets and those involving events more than two months old. Finally, MUC templates are generated from these frames.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TEMPLATE GENERATOR", "sec_num": null } ], "back_matter": [ { "text": "The development of the entire PROTEUS system has been sponsored primarily by the Defense Advanced Research Projects Agency as part of the Strategic Computing Program, under Contract N00014-85-K-0163 and Grant N00014-90-J-1851 from the Office of Naval Research. Additional support has been received from the National Science Foundation under grant DCR-85-01843 for work on enhancing system robustness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SPONSORSHIP", "sec_num": null } ], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "Preference Semantics for Message Understanding", "authors": [ { "first": "R", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "J", "middle": [], "last": "Sterling", "suffix": "" } ], "year": 1989, "venue": "Proc. DARPA Speech and Natural Language Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grishman, R., and Sterling, J. Preference Semantics for Message Understanding. Proc. DARPA Speech and Natural Language Workshop, Morgan Kaufmann, 1990 (proceedings of the conference at Harwich Port, MA, Oct.
15-18, 1989).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Information Extraction and Semantic Constraints", "authors": [ { "first": "R", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "J", "middle": [], "last": "Sterling", "suffix": "" } ], "year": 1990, "venue": "Proc. 13th Int'l Conf. Computational Linguistics (COLING 90)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grishman, R., and Sterling, J. Information Extraction and Semantic Constraints. Proc. 13th Int'l Conf. Computational Linguistics (COLING 90), Helsinki, August 20-25, 1990.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Natural Language Information Processing", "authors": [ { "first": "N", "middle": [], "last": "Sager", "suffix": "" } ], "year": 1981, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sager, N. Natural Language Information Processing, Addison-Wesley, 1981.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Structure of the Proteus System as used for MUC-4", "num": null, "type_str": "figure" }, "FIGREF1": { "uris": null, "text": "Parse tree produced for the first sentence of TST2-0048", "num": null, "type_str": "figure" }, "FIGREF2": { "uris": null, "text": "Regularized structure for the first sentence of TST2-0048", "num": null, "type_str": "figure" }, "FIGREF3": { "uris": null, "text": "Output of semantic analysis for the first sentence of TST2-0048", "num": null, "type_str": "figure" }
} } }