{ "paper_id": "A92-1026", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:03:47.143651Z" }, "title": "Robust Processing of Real-World Natural-Language Texts", "authors": [ { "first": "Jerry", "middle": [ "R" ], "last": "Hobbs", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Douglas", "middle": [ "E" ], "last": "Appelt", "suffix": "", "affiliation": {}, "email": "" }, { "first": "John", "middle": [], "last": "Bear", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Mabry", "middle": [], "last": "Tyson", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "I1. is often assumed that when natural language processing meets the real world, the ideal of aiming for complete and correct interpretations has to be abandoned. However, our experience with TACITUS; especially in the MUC-3 evaluation, has shown that. principled techniques fox' syntactic and pragmatic analysis can be bolstered with methods for achieving robustness. We describe three techniques for making syntactic analysis more robust-an agendabased scheduling parser, a recovery technique for failed parses, and a new technique called terminal substring parsing. For pragmatics processing, we describe how the method of abductive inference is inherently robust, in that an interpretation is always possible, so that in the absence of the required world knowledge, performance degrades gracefully. Each of these techlfiques have been evaluated and the results of the evaluations are presented.", "pdf_parse": { "paper_id": "A92-1026", "_pdf_hash": "", "abstract": [ { "text": "I1. is often assumed that when natural language processing meets the real world, the ideal of aiming for complete and correct interpretations has to be abandoned. However, our experience with TACITUS; especially in the MUC-3 evaluation, has shown that. principled techniques fox' syntactic and pragmatic analysis can be bolstered with methods for achieving robustness. We describe three techniques for making syntactic analysis more robust-an agendabased scheduling parser, a recovery technique for failed parses, and a new technique called terminal substring parsing. For pragmatics processing, we describe how the method of abductive inference is inherently robust, in that an interpretation is always possible, so that in the absence of the required world knowledge, performance degrades gracefully. Each of these techlfiques have been evaluated and the results of the evaluations are presented.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "If automatic text processing is to be a useful enterprise, it. must be demonstrated that the completeness and accuracy of the information extracted is adequate for the application one has in nfind. While it is clear that certain applications require only a minimal level of competence from a system, it is also true that many applicationsrequire a very high degree of completeness and a.ccuracy in text processing, and an increase in capability in either area is a clear advantage. Therefore we adopt an extremely lfigh standard against which the performance of a text processing system should be measured: it. should recover all information that is implicitly or explicitly present in the text, and it should do so without making mistakes. Tiffs standard is far beyond the state of the art. It is an impossibly high standard for human beings, let alone machines. 
However, progress toward adequate text processing is best served by setting ambitious goals. For this reason we believe that, while it may be necessary in the intermediate term to settle for results that are far short of this ultimate goal, any linguistic theory or system architecture that is adopted should not be demonstrably inconsistent with attaining this objective. However, if one is interested, as we are, in the potentially successful application of these intermediate-term systems to real problems, it is impossible to ignore the question of whether they can be made efficient enough and robust enough for application.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The TACITUS text processing system has been under development at SRI International for the last six years. This system has been designed as a first step toward the realization of a system with very high completeness and accuracy in its ability to extract information from text. The general philosophy underlying the design of this system is that the system, to the maximum extent possible, should not discard any information that might be semantically or pragmatically relevant to a full, correct interpretation. The effect of this design philosophy on the system architecture is manifested in the following characteristics:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TACITUS System", "sec_num": "1.1" }, { "text": "\u2022 TACITUS relies on a large, comprehensive lexicon containing detailed syntactic subcategorization information for each lexical item.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TACITUS System", "sec_num": "1.1" }, { "text": "\u2022 TACITUS produces a parse and semantic interpretation of each sentence using a comprehensive grammar of English in which different possible predicate-argument relations are associated with different syntactic structures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TACITUS System", "sec_num": "1.1" }, { "text": "\u2022 TACITUS relies on a general abductive reasoning mechanism to uncover the implicit assumptions necessary to explain the coherence of the explicit text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TACITUS System", "sec_num": "1.1" }, { "text": "These basic design decisions do not by themselves distinguish TACITUS from a number of other natural-language processing systems. However, they are somewhat controversial given the intermediate goal of producing systems that are useful for existing applications. Criticism of the overall design with respect to this goal centers on the following observations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TACITUS System", "sec_num": "1.1" }, { "text": "\u2022 The syntactic structure of English is very complex, and no grammar of English has been constructed that has complete coverage of the syntax one encounters in real-world texts. Much of the text that needs to be processed will lie outside the scope of the best grammars available, and therefore cannot be understood by a system that relies on a complete syntactic analysis of each sentence as a prerequisite to other processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TACITUS System", "sec_num": "1.1" }, { "text": "\u2022 Typical sentences in newspaper articles are about 25-30 words in length. Many sentences are much longer.
Processing strategies that rely on producing a complete syntactic analysis of such sentences will be faced with a combinatorially intractable task, assuming in the first place that the sentences lie within the language described by the grammar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TACITUS System", "sec_num": "1.1" }, { "text": "\u2022 Any grammar that successfully accounts for the range of syntactic structures encountered in real-world texts will necessarily produce many ambiguous analyses of most sentences. Assuming that the system can find the possible analyses of a sentence in a reasonable period of time, it is still faced with the problem of choosing the correct one from the many competing ones.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TACITUS System", "sec_num": "1.1" }, { "text": "Designers of application-oriented text processing systems have adopted a number of strategies for dealing with these problems. Such strategies involve deemphasizing the role of syntactic analysis (Jacobs et al., 1991), producing partial parses with stochastic or heuristic parsers (de Marcken, 1990; Weischedel et al., 1991), or resorting to weaker syntactic processing methods such as conceptual or case-frame based parsing (e.g., Schank and Riesbeck, 1981) or template matching techniques (Jackson et al., 1991). A common feature shared by these weaker methods is that they ignore certain information that is present in the text, which could be extracted by a more comprehensive analysis. The information that is ignored may be irrelevant to a particular application, or relevant in only an insignificant handful of cases, and thus we cannot argue that approaches to text processing based on weak or even nonexistent syntactic and semantic analysis are doomed to failure in all cases and are not worthy of further investigation. However, it is not obvious how such methods can scale up to handle fine distinctions in attachment, scoping, and inference, although some recent attempts have been made in this direction (Cardie and Lehnert, 1991b).", "cite_spans": [ { "start": 198, "end": 219, "text": "(Jacobs et al., 1991)", "ref_id": "BIBREF6" }, { "start": 432, "end": 458, "text": "Schank and Riesbeck, 1981)", "ref_id": null }, { "start": 1217, "end": 1244, "text": "(Cardie and Lehnert, 1991b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The TACITUS System", "sec_num": "1.1" }, { "text": "In the development of TACITUS, we have chosen a design philosophy that assumes that a complete and accurate analysis of the text is being undertaken. In this paper we discuss how issues of robustness are approached from this general design perspective.
In particular, we demonstrate that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TACITUS System", "sec_num": "1.1" }, { "text": "\u2022 useful partial analyses of the text can be obtained in cases in which the text is not grammatical English, or lies outside the scope of the grammar's coverage,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TACITUS System", "sec_num": "1.1" }, { "text": "\u2022 substantially correct parses of sentences can be found without exploring the entire search space for each sentence,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TACITUS System", "sec_num": "1.1" }, { "text": "\u2022 useful pragmatic interpretations can be obtained using general reasoning methods, even in cases in which the system lacks the necessary world knowledge to resolve all of the pragmatic problems posed in a sentence, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TACITUS System", "sec_num": "1.1" }, { "text": "\u2022 all of this processing can be done within acceptable bounds on computational resources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TACITUS System", "sec_num": "1.1" }, { "text": "Our experience with TACITUS suggests that extension of the system's capabilities to higher levels of completeness and accuracy can be achieved through incremental modifications of the system's knowledge, lexicon, and grammar, while the robust processing techniques discussed in the following sections make the system usable for intermediate-term applications. We have evaluated the success of the various techniques discussed here, and conclude from this evaluation that TACITUS offers substantiation of our claim that a text processing system based on principles of complete syntactic, semantic, and pragmatic analysis need not be too brittle or computationally expensive for practical applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The TACITUS System", "sec_num": "1.1" }, { "text": "SRI International participated in the recent MUC-3 evaluation of text-understanding systems (Sundheim, 1991). The methodology chosen for this evaluation was to score a system's ability to fill in slots in templates summarizing the content of short (approximately 1 page) newspaper articles on Latin American terrorism. The template-filling task required identifying, among other things, the perpetrators and victims of each terrorist act described in the articles, the occupation of the victims, the type of physical entity attacked or destroyed, the date, the location, and the effect on the targets. Frequently, articles described multiple incidents, while other texts were completely irrelevant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating the System", "sec_num": "1.2" }, { "text": "A set of 1,300 such newspaper articles was selected on the basis of the presence of keywords in the text, and given to participants as training data. Several hundred texts from the corpus were withheld for various phases of testing. Participants were scored on their ability to fill the templates correctly. Recall and precision measures were computed as an objective performance evaluation metric.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating the System", "sec_num": "1.2" }
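To make the two metrics concrete, the following is a minimal sketch of template-fill scoring in this spirit. It is a hypothetical simplification for illustration only: the actual MUC-3 scorer handled partial credit, optional slots, and the alignment of system templates with key templates, none of which is modeled here, and the function name and data layout are our own.

```python
# Simplified recall/precision over template slot fills (illustration only;
# the official MUC-3 scorer was considerably more elaborate).

def score_fills(key_fills, system_fills):
    """key_fills and system_fills are sets of (slot, value) pairs."""
    found = key_fills & system_fills            # correct fills the system found
    recall = len(found) / len(key_fills)        # completeness
    precision = len(found) / len(system_fills)  # accuracy
    return recall, precision

# Hypothetical answer key and system output for one incident.
key = {("PERP", "SHINING PATH"), ("LOCATION", "LIMA"), ("DATE", "24 OCT 89")}
system = {("PERP", "SHINING PATH"), ("LOCATION", "LIMA"), ("LOCATION", "AYACUCHO")}
print(score_fills(key, system))  # recall 2/3, precision 2/3
```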
, { "text": "Variations in computing these metrics are possible, but intuitively, recall measures the percentage of correct fills a system finds (ignoring wrong and spurious answers), and precision measures the percentage of correct fills provided out of the total number of answers posited. Thus, recall measures the completeness of a system's ability to extract information from a text, while precision measures its accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating the System", "sec_num": "1.2" }, { "text": "The TACITUS system achieved a recall of 44% with a precision of 65% on templates for events correctly identified, and a recall of 25% with a precision of 48% on all templates, including spurious templates the system generated. Our precision was the highest among the participating sites; our recall was somewhere in the middle. Although we were pleased with these overall results, a subsequent detailed analysis of our performance on the first 20 messages of the 100-message test set is much more illuminating for evaluating the success of the particular robust processing strategies we have chosen. In the remainder of this paper, we discuss the impact of the robust processing methods in the light of this detailed analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating the System", "sec_num": "1.2" }, { "text": "Robust syntactic analysis requires a very large grammar and means for dealing with sentences that do not parse, whether because they fall outside the coverage of the grammar or because they are too long for the parser. The grammar used in TACITUS is that of the DIALOGIC system, developed in 1980-81 essentially by constructing the union of the Linguistic String Project Grammar (Sager, 1981) and the DIAGRAM grammar (Robinson, 1982), which grew out of SRI's Speech Understanding System research in the 1970s. Since that time it has been considerably enhanced. It consists of about 160 phrase structure rules. Associated with each rule is a \"constructor\" expressing the constraints on the applicability of that rule, and a \"translator\" for producing the logical form. The grammar is comprehensive and includes subcategorization, sentential complements, adverbials, relative clauses, complex determiners, the most common varieties of conjunction and comparison, selectional constraints, some coreference resolution, and the most common sentence fragments. The parses are ordered according to heuristics encoded in the grammar.", "cite_spans": [ { "start": 385, "end": 398, "text": "(Sager, 1981)", "ref_id": "BIBREF11" }, { "start": 425, "end": 441, "text": "(Robinson, 1982)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": "2" }, { "text": "The parse tree is translated into a logical representation of the meaning of the sentence, encoding predicate-argument relations and grammatical subordination relations. In addition, it regularizes to some extent the role assignments in the predicate-argument structure, and handles arguments inherited from control verbs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": "2" }, { "text": "Our lexicon includes about 20,000 entries, including about 2,000 personal names and about 2,000 location, organization, or other names. This number does not include morphological variants, which are handled in a separate morphological analyzer.
(In addition, there are special procedures for handling unknown words, including unknown names, described in Hobbs et al., 1991.) The syntactic analysis component was remarkably successful in the MUC-3 evaluation. This was due primarily to three innovations.", "cite_spans": [ { "start": 353, "end": 373, "text": "Hobbs et al., 1991.)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": "2" }, { "text": "\u2022 An agenda-based scheduling chart parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": "2" }, { "text": "\u2022 A recovery heuristic for unparsable sentences that finds the best sequence of grammatical fragments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": "2" }, { "text": "\u2022 The use of \"terminal substring parsing\" for very long sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": "2" }, { "text": "Each of these techniques will be described in turn, with statistics on their performance in the MUC-3 evaluation. Bottom-up parsing is favored for its robustness, and this robustness derives from the fact that a bottom-up parser will construct nodes and edges in the chart that a parser with top-down prediction would not. The obvious problem is that these additional nodes do not come without an associated cost. Moore and Dowding (1991) observed a ninefold increase in the time required to parse sentences with a straightforward CKY parser as opposed to a shift-reduce parser. Prior to November 1990, TACITUS employed a simple, exhaustive, bottom-up parser, with the result that sentences of more than 15 to 20 words were impossible to parse in reasonable time. Since the average length of a sentence in the MUC-3 texts is approximately 25 words, such techniques were clearly inappropriate for the application.", "cite_spans": [ { "start": 415, "end": 439, "text": "Moore and Dowding (1991)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": "2" }, { "text": "We addressed this problem by adding an agenda mechanism to the bottom-up parser, based on Kaplan (1973), as described in Winograd (1983). The purpose of the agenda is to allow us to order nodes (complete constituents) and edges (incomplete constituents) in the chart for further processing. As nodes and edges are built, they are rated according to various criteria for how likely they are to figure in a correct parse. This allows us to schedule which constituents to work with first so that we can pursue only the most likely paths in the search space and find a parse without exhaustively trying all possibilities. The scheduling algorithm is simple: explore the ramifications of the highest-scoring constituents first.", "cite_spans": [ { "start": 90, "end": 103, "text": "Kaplan (1973)", "ref_id": "BIBREF7" }, { "start": 122, "end": 137, "text": "Winograd (1983)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": "2" }, { "text": "In addition, there is a facility for pruning the search space. The user can set limits on the number of nodes and edges that are allowed to be stored in the chart. Nodes are indexed on their atomic grammatical category (i.e., excluding features) and the string position at which they begin. Edges are indexed on their atomic grammatical category and the string position where they end.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": "2" }
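The following sketch shows the shape of such an agenda mechanism, together with the per-cell pruning rule stated next. It is our reconstruction for exposition, not the TACITUS code: the grammar hook `extend`, the seeding with scored lexical constituents, and the `Constituent` record are all assumptions, and the scores stand in for the DIALOGIC preference heuristics.

```python
import heapq
from collections import defaultdict, namedtuple

# A chart constituent: a node when complete is True, an edge otherwise.
Constituent = namedtuple("Constituent", "category start end complete score")

def parse(lexical_nodes, n_words, extend, beam_width=3):
    """Agenda-based bottom-up chart parsing in the spirit described above.

    lexical_nodes -- initial scored Constituents for the words of the sentence
    extend(c, chart) -- yields new constituents buildable from c; it stands in
                        for the grammar rules, which are not modeled here
    """
    agenda = []                           # max-heap: highest score explored first
    chart = defaultdict(list)             # (category, string position) -> constituents

    def schedule(c):
        # Nodes are indexed where they begin, edges where they end.
        cell = chart[(c.category, c.start if c.complete else c.end)]
        cell.append(c)
        cell.sort(key=lambda x: -x.score)
        del cell[beam_width:]             # prune all but the n best per cell
        if c in cell:                     # pursue only constituents that survived
            heapq.heappush(agenda, (-c.score, id(c), c))

    for c in lexical_nodes:
        schedule(c)

    while agenda:
        _, _, c = heapq.heappop(agenda)   # most promising constituent first
        if c.complete and c.category == "S" and c.start == 0 and c.end == n_words:
            return c                      # stop at the first full parse found
        for new in extend(c, chart):
            schedule(new)
    return None                           # no parse: fall back to fragment recovery
```

Draining the agenda regardless of score would reproduce the old exhaustive behavior; the scheduling and pruning make the parser commit to the most promising paths instead.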
, { "text": "The algorithm for pruning is simple: throw away all but the n highest-scoring constituents for each category/string-position pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": "2" }, { "text": "It has often been pointed out that various standard parsing strategies correspond to various scheduling strategies in an agenda-based parser. However, in practical parsing, what is needed is a scheduling strategy that enables us to pursue only the most likely paths in the search space and to find the correct parse without exhaustively trying all possibilities. The literature has not been as illuminating on this issue.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": "2" }, { "text": "We designed our parser to score each node and edge on the basis of three criteria:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": "2" }, { "text": "\u2022 The length of the substring spanned by the constituent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": "2" }, { "text": "\u2022 Whether the constituent is a node or an edge, that is, whether the constituent is complete or not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": "2" }, { "text": "\u2022 The scores derived from the preference heuristics that have been encoded in DIALOGIC over the years, described and systematized in Hobbs and Bear (1990).", "cite_spans": [ { "start": 133, "end": 154, "text": "Hobbs and Bear (1990)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": "2" }, { "text": "However, after considerable experimentation with various weightings, we concluded that the length and completeness factors failed to improve the performance at all over a broad range of sentences. Evidence suggested that a score based on the preference factors alone produces the best results. The reason a correct or nearly correct parse is found so often by this method is that these preference heuristics are so effective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": "2" }, { "text": "In the first 20 messages of the test set, 131 sentences were given to the scheduling parser, after statistically based relevance filtering. A parse was produced for 81 of the 131 sentences, or 62%. Of these, 43 (or 33%) were completely correct, and 30 more had three or fewer errors. Thus, 56% of the sentences were parsed correctly or nearly correctly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": "2" }, { "text": "These results naturally vary depending on the length of the sentences. There were 64 sentences of under 30 morphemes (where by \"morpheme\" we mean words plus inflectional affixes). Of these, 37 (58%) had completely correct parses and 48 (75%) had three or fewer errors. By contrast, the scheduling parser attempted only 8 sentences of more than 50 morphemes, and only two of these parsed, neither of them even nearly correctly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": "2" }, { "text": "Of the 44 sentences that would not parse, nine were due to problems in lexical entries. Eighteen were due to shortcomings in the grammar, primarily involving adverbial placement and less than fully general treatment of conjunction and comparatives.
Six were due to garbled text. The causes of eleven failures to parse have not been determined. These errors are spread out evenly across sentence lengths. In addition, seven sentences of over 30 morphemes hit the time limit we had set, and terminal substring parsing, as described below, was invoked.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": "2" }, { "text": "A majority of the errors in parsing can be attributed to five or six causes. Two prominent causes are the tendency of the scheduling parser to lose favored close attachments of conjuncts and adjuncts near the end of long sentences, and the tendency to misanalyze the string ... We believe that most of these problems are due to the fact that the work of the scheduling parser is not distributed evenly enough across the different parts of the sentence, and we expect that this difficulty could be solved with relatively little effort.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": "2" }, { "text": "Our results in syntactic analysis are quite encouraging, since they show that a high proportion of a corpus of long and very complex sentences can be parsed nearly correctly. However, the situation is even better when one considers the results for the best-fragment-sequence heuristic and for terminal substring parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": "2" }, { "text": "When a sentence does not parse, we attempt to span it with the longest, best sequence of interpretable fragments. The fragments we look for are main clauses, verb phrases, adverbial phrases, and noun phrases. They are chosen on the basis of length and their preference scores, favoring length over preference score. We do not attempt to find fragments for strings of less than five morphemes. The effect of this heuristic is that even for sentences that do not parse, we are able to extract nearly all of the propositional content.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recovery from Failed Parses", "sec_num": "2.2" }, { "text": "For example, the sentence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recovery from Failed Parses", "sec_num": "2.2" }, { "text": "The attacks today come after Shining Path attacks during which least 10 buses were burned throughout Lima on 24 Oct.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recovery from Failed Parses", "sec_num": "2.2" }, { "text": "did not parse because of the use of \"least\" instead of \"at least\". Hence, the best fragment sequence was sought. This consisted of the two fragments \"The attacks today come after Shining Path attacks\" and \"10 buses were burned throughout Lima on 24 Oct.\" The parses for both these fragments were completely correct. Thus, the only information lost was from the three words \"during which least\". Frequently such information can be recaptured by the pragmatics component. In this case, the burning would be recognized as a consequence of the attack.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recovery from Failed Parses", "sec_num": "2.2" }, { "text": "In the first 20 messages of the test set, a best sequence of fragments was sought for the 44 sentences that did not parse for reasons other than timing. A sequence was found for 41 of these; the other three were too short, with problems in the middle. The average number of fragments in a sequence was two.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recovery from Failed Parses", "sec_num": "2.2" }
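The recovery heuristic itself can be sketched as a dynamic program over the interpretable fragments left in the chart after a failed parse. This too is a hypothetical reconstruction rather than the TACITUS implementation; the fragment spans and preference scores are assumed to come from the parser.

```python
def best_fragment_sequence(fragments, n_morphemes, min_length=5):
    """Span the sentence with the longest, best sequence of fragments.

    fragments -- (start, end, preference_score, parse) tuples for the main
                 clauses, verb phrases, adverbial phrases, and noun phrases
                 found in the chart; positions are morpheme indices
    Length is favored over preference score, as described above.
    """
    usable = [f for f in fragments if f[1] - f[0] >= min_length]
    # best[i]: (morphemes covered, total preference, fragments) for the part
    # of the sentence from position i on, computed right to left.
    best = {n_morphemes: (0, 0.0, [])}
    for i in range(n_morphemes - 1, -1, -1):
        candidates = [best[i + 1]]                 # leave position i uncovered
        for f in usable:
            if f[0] == i:                          # a fragment starting here
                covered, pref, rest = best[f[1]]
                candidates.append((covered + f[1] - f[0], pref + f[2], [f] + rest))
        best[i] = max(candidates, key=lambda c: (c[0], c[1]))
    return best[0][2]
```

On the "Shining Path" example above, such a procedure would return the two fragments in order, leaving only "during which least" uncovered.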
, { "text": "With an average of two fragments per sequence, this means that on average only one structural relationship per sentence was lost. Moreover, the fragments covered 88% of the morphemes. That is, even in the case of failed parses, 88% of the propositional content of the sentences was made available to pragmatics. Frequently the lost propositional content is from a preposed or postposed temporal or causal adverbial, and the actual temporal or causal relationship is replaced by simple logical conjunction of the fragments. In such cases, much useful information is still obtained from the partial results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recovery from Failed Parses", "sec_num": "2.2" }, { "text": "For 37% of the 41 sentences, correct syntactic analyses of the fragments were produced. For 74%, the analyses contained three or fewer errors. Correctness did not correlate with length of sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recovery from Failed Parses", "sec_num": "2.2" }, { "text": "These numbers could probably be improved. We favored the longest fragment regardless of preference scores. Thus, frequently a high-scoring main clause was rejected because, by tacking a noun onto the front of that fragment and reinterpreting the main clause bizarrely as a relative clause, we could form a low-scoring noun phrase that was one word longer. We therefore plan to experiment with combining length and preference score in a more intelligent manner.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recovery from Failed Parses", "sec_num": "2.2" }, { "text": "For sentences of longer than 60 words, and for faster, though less accurate, parsing of shorter sentences, we developed a technique we call terminal substring parsing. The sentence is segmented into substrings by breaking it at commas, conjunctions, relative pronouns, and certain instances of the word \"that\". The substrings are then parsed, starting with the last one and working back. For each substring, we try either to parse the substring itself as one of several categories or to parse the entire set of substrings parsed so far as one of those categories. The best such structure is selected, and for subsequent processing, that is the only analysis of that portion of the sentence allowed. The categories that we look for include main, subordinate, and relative clauses, infinitives, verb phrases, prepositional phrases, and noun phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Terminal Substring Parsing", "sec_num": "2.3" }, { "text": "A simple example is the following, although we do not apply the technique to sentences or to fragments this short.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Terminal Substring Parsing", "sec_num": "2.3" }, { "text": "George Bush, the president, held a press conference yesterday.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Terminal Substring Parsing", "sec_num": "2.3" }, { "text": "This sentence would be segmented at the commas. First ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Terminal Substring Parsing", "sec_num": "2.3" }
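The segment-and-work-backwards loop can be sketched as follows. This is a simplified reconstruction under stated assumptions: `parse_as(tokens, category)` stands in for running the parser restricted to a single target category, segmentation is shown only for commas and coordinating conjunctions, and the frozen analysis of the parsed suffix is passed back in as a single pseudo-token rather than as a chart constituent.

```python
import re

# Categories sought, per the text above: main, subordinate, and relative
# clauses, infinitives, verb phrases, prepositional phrases, noun phrases.
CATEGORIES = ["S", "SBAR", "REL", "INF", "VP", "PP", "NP"]

def terminal_substring_parse(sentence, parse_as):
    """Parse a long sentence by segmenting it and working back from the end.

    parse_as(tokens, category) -> a scored analysis (with a .score field) or
    None.  The full system also segmented at relative pronouns and certain
    instances of "that"; only commas and conjunctions are handled here.
    """
    segments = [s.split() for s in re.split(r",| and | or ", sentence) if s.strip()]

    best = None                     # the sole analysis allowed for the suffix
    for seg in reversed(segments):
        candidates = []
        for cat in CATEGORIES:
            a = parse_as(seg, cat)                 # the substring by itself
            if a is not None:
                candidates.append(a)
            if best is not None:                   # substring plus frozen suffix
                a = parse_as(seg + [best], cat)
                if a is not None:
                    candidates.append(a)
        if candidates:
            best = max(candidates, key=lambda a: a.score)
    return best
```

Because each suffix contributes exactly one frozen analysis to the next round, the search grows roughly with the number of segments rather than combinatorially in the sentence length, which is presumably what makes sentences of over 60 words tractable.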
, { "text": "It is often assumed that when natural language processing meets the real world, the ideal of aiming for complete and correct interpretations has to be abandoned. However, our experience with TACITUS, especially in the MUC-3 evaluation, has shown that principled techniques for syntactic and pragmatic analysis can be bolstered with methods for achieving robustness, yielding a system with some utility in the short term and showing promise of more in the long term.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" } ], "back_matter": [ { "text": "This research has been funded by the Defense Advanced Research Projects Agency under Office of Naval Research contracts N00014-85-C-0013 and N00014-90-C-0220.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A Cognitively Plausible Approach to Understanding Complex Syntax", "authors": [ { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "Wendy", "middle": [], "last": "Lehnert", "suffix": "" } ], "year": 1991, "venue": "Proceedings, Ninth National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "117--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cardie, Claire, and Wendy Lehnert, 1991. \"A Cognitively Plausible Approach to Understanding Complex Syntax\", Proceedings, Ninth National Conference on Artificial Intelligence, pp. 117-124.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Preference Semantics for Message Understanding", "authors": [ { "first": "R", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "J", "middle": [], "last": "Sterling", "suffix": "" } ], "year": 1989, "venue": "Proceedings, DARPA Speech and Natural Language Workshop", "volume": "", "issue": "", "pages": "71--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grishman, R., and J. Sterling, 1989. \"Preference Semantics for Message Understanding\", Proceedings, DARPA Speech and Natural Language Workshop, pp. 71-74.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Resolving Pronoun References", "authors": [ { "first": "Jerry", "middle": [ "R" ], "last": "Hobbs", "suffix": "" } ], "year": 1978, "venue": "Lingua", "volume": "44", "issue": "", "pages": "339--352", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hobbs, Jerry R., 1978. \"Resolving Pronoun References\", Lingua, Vol. 44, pp. 311-338. Also in Readings in Natural Language Processing, B. Grosz, K. Sparck-Jones, and B. Webber, editors, pp. 339-352, Morgan Kaufmann Publishers, Los Altos, California.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Two Principles of Parse Preference", "authors": [ { "first": "Jerry", "middle": [ "R" ], "last": "Hobbs", "suffix": "" }, { "first": "John", "middle": [], "last": "Bear", "suffix": "" } ], "year": 1990, "venue": "Thirteenth International Conference on Computational Linguistics", "volume": "3", "issue": "", "pages": "162--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hobbs, Jerry R., and John Bear, 1990. \"Two Principles of Parse Preference\", in H. Karlgren, ed., Proceedings, Thirteenth International Conference on Computational Linguistics, Helsinki, Finland, Vol. 3, pp.
162-167, August 1990.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Interpretation as Abduction", "authors": [ { "first": "Jerry", "middle": [ "R" ], "last": "Hobbs", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Stickel", "suffix": "" }, { "first": "Douglas", "middle": [], "last": "Appelt", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Martin", "suffix": "" } ], "year": 1990, "venue": "SRI International Artificial Intelligence Center Technical Note", "volume": "499", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hobbs, Jerry R., Mark Stickel, Douglas Appelt, and Paul Martin, 1990. \"Interpretation as Abduction\", SRI International Artificial Intelligence Center Technical Note 499, December 1990.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A Template Matcher for Robust NL Interpretation", "authors": [ { "first": "Eric", "middle": [], "last": "Jackson", "suffix": "" }, { "first": "Douglas", "middle": [], "last": "Appelt", "suffix": "" }, { "first": "John", "middle": [], "last": "Bear", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Moore", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Podlozny", "suffix": "" } ], "year": 1991, "venue": "Proceedings, DARPA Speech and Natural Language Workshop", "volume": "", "issue": "", "pages": "190--194", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jackson, Eric, Douglas Appelt, John Bear, Robert Moore, and Ann Podlozny, 1991. \"A Template Matcher for Robust NL Interpretation\", Proceedings, DARPA Speech and Natural Language Workshop, February 1991, Asilomar, California, pp. 190-194.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Lexico-Semantic Pattern Matching as a Companion to Parsing in Text Understanding", "authors": [ { "first": "Paul", "middle": [ "S" ], "last": "Jacobs", "suffix": "" }, { "first": "George", "middle": [ "R" ], "last": "Krupka", "suffix": "" }, { "first": "Lisa", "middle": [ "F" ], "last": "Rau", "suffix": "" } ], "year": 1991, "venue": "Proceedings, DARPA Speech and Natural Language Workshop", "volume": "", "issue": "", "pages": "337--341", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacobs, Paul S., George R. Krupka, and Lisa F. Rau, 1991. \"Lexico-Semantic Pattern Matching as a Companion to Parsing in Text Understanding\", Proceedings, DARPA Speech and Natural Language Workshop, February 1991, Asilomar, California, pp. 337-341.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A General Syntactic Processor", "authors": [ { "first": "Ronald", "middle": [], "last": "Kaplan", "suffix": "" } ], "year": 1973, "venue": "Randall Rustin (Ed.), Natural Language Processing", "volume": "", "issue": "", "pages": "193--241", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaplan, Ronald, 1973. \"A General Syntactic Processor\", in Randall Rustin (Ed.), Natural Language Processing, Algorithmics Press, New York, pp. 193-241.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Parsing the LOB Corpus", "authors": [ { "first": "C", "middle": [ "G" ], "last": "De Marcken", "suffix": "" } ], "year": 1990, "venue": "Proceedings, 28th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "243--251", "other_ids": {}, "num": null, "urls": [], "raw_text": "de Marcken, C.G., 1990.
\"Parsing the LOB Corpus,\" Proceedings, 28th Annual Meeting of the Association for Computational Linguistics, pp. 243-251.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Efficient Bottom-Up Parsing", "authors": [ { "first": "R", "middle": [ "C" ], "last": "Moore", "suffix": "" }, { "first": "J", "middle": [], "last": "Dowding", "suffix": "" } ], "year": 1991, "venue": "Proceedings, DARPA Speech and Natural Language Workshop", "volume": "", "issue": "", "pages": "200--203", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moore, R.C., and J. Dowding, 1991. \"Efficient Bottom-Up Parsing,\" Proceedings, DARPA Speech and Natural Language Workshop, February 1991, Asilomar, California, pp. 200-203.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "DIAGRAM: A Grammar for Dialogues", "authors": [ { "first": "Jane", "middle": [], "last": "Robinson", "suffix": "" } ], "year": 1982, "venue": "Communications of the A CM", "volume": "25", "issue": "1", "pages": "27--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robinson, Jane, 1982. \"DIAGRAM: A Grammar for Dialogues\", Communications of the A CM, Vol. 25, No. 1, pp. 27-47, January 1982.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Natural Language Inform.alion Processing: A Computer Grammar of English. and Its Applications", "authors": [ { "first": "Naomi", "middle": [], "last": "Sager", "suffix": "" } ], "year": 1981, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sager, Naomi, 1981. Natural Language Inform.a- lion Processing: A Computer Grammar of English. and Its Applications, Addison-Wesley, Reading, Mas- sachusetts.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Inside Computer Understanding: Five Programs Plus Miniatures", "authors": [ { "first": "Roger", "middle": [], "last": "Sehank", "suffix": "" }, { "first": "C", "middle": [], "last": "Riesbeck", "suffix": "" } ], "year": 1981, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sehank, Roger and C. Riesbeck, 1981. Inside Com- puter Understanding: Five Programs Plus Miniatures, Lawrence Erlbaum, Hillsdale, New Jersey.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A Prolog-like Inference System for Computing Minimum-Cost Abductive Explanations in Natural-Language Interpretation", "authors": [ { "first": "Mark", "middle": [ "E" ], "last": "Stickel", "suffix": "" } ], "year": 1988, "venue": "Proceedings of the International Computer Science Conference-88", "volume": "", "issue": "", "pages": "343--350", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stickel, Mark E., 1988. \"A Prolog-like Inference System for Computing Minimum-Cost Abductive Explanations in Natural-Language Interpretation\", Proceedings of the International Computer Science Conference-88, pp. 343-350, Hong Kong, December 1988. Also published as Technical Note 451, Artificial Intelligence Center, SRI International, Menlo Park, California, September 1988.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Proceedings, Third Message Understanding Conference (MUC-3)", "authors": [], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sundheim, Beth (editor), 1991. 
Proceedings, Third Message Understanding Conference (MUC-3), San Diego, California, May 1991.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Partial Parsing: A Report on Work in Progress", "authors": [ { "first": "R", "middle": [], "last": "Weischedel", "suffix": "" }, { "first": "D", "middle": [], "last": "Ayuso", "suffix": "" }, { "first": "S", "middle": [], "last": "Boisen", "suffix": "" }, { "first": "R", "middle": [], "last": "Ingria", "suffix": "" }, { "first": "J", "middle": [], "last": "Palmucci", "suffix": "" } ], "year": 1991, "venue": "Proceedings, DARPA Speech and Natural Language Workshop", "volume": "", "issue": "", "pages": "204--209", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weischedel, R., D. Ayuso, S. Boisen, R. Ingria, and J. Palmucci, 1991. \"Partial Parsing: A Report on Work in Progress\", Proceedings, DARPA Speech and Natural Language Workshop, February 1991, Asilomar, California, pp. 204-209.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Language as a Cognitive Process", "authors": [ { "first": "Terry", "middle": [], "last": "Winograd", "suffix": "" } ], "year": 1983, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Winograd, Terry, 1983. Language as a Cognitive Process, Addison-Wesley, Menlo Park, California.", "links": null } }, "ref_entries": { "TABREF0": { "content": "
2.1
", "text": "Performance of the Scheduling Parser and the GrammarTile fastest parsing algorithms for context-free grammars make use of prediction based on left context to limit the nnmber of nodes and edges the parser must insert into tim chart. However, if robustness in the face of possibly ungramlnatical input or inadequate grammatical coverage is desired, such algorithms are inappropriate.Although the heuristic of choosing tile longest possible substring beginning at the left, that can be parsed as a sentence could be tried (e.g.Grishman and Sterling, 1989), solnetimes, the best fraglnentary analysis of a sentence can only be found by parsing an intermediate or terminal substring that excludes the leftmost words. For this reason, we feel that bottom-up parsing without strong constraints based on left context, are required for robust syntactic analysis.", "type_str": "table", "html": null, "num": null } } } }