{ "paper_id": "1991", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:35:56.483146Z" }, "title": "LOCAL SYNTACTIC CONSTRAINTS", "authors": [ { "first": "Jacky", "middle": [], "last": "Herz", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hebrew University of Jerusalem", "location": { "addrLine": "Giv'at Ram", "postCode": "91904", "settlement": "Jerusalem", "country": "ISRAEL" } }, "email": "" }, { "first": "Mori", "middle": [], "last": "Rimon", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hebrew University of Jerusalem", "location": { "addrLine": "Giv'at Ram", "postCode": "91904", "settlement": "Jerusalem", "country": "ISRAEL" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "A method to reduce ambi gu ity at the level of word tagging, on the basis of local syntactic con straints, is described. Such \"short context\" con straints are easy to process and can remove most of the ambi gu ity at that level, which is otherwise a source of great difficulty for parsers and other applications in certain natural lan gu ages. The use of local constraints is also very effective for quick invalidation of a large set of ill-formed inputs. \\Vhile in some approaches local con straints\u2022 are defined manually or discovered by processing of large corpora, we extract them directly from a grammar (typically \u2022context free) of the given lan gu age. We focus on deterministic constraints, but later extend the rnethod for a probabilistic lan gu age model.", "pdf_parse": { "paper_id": "1991", "_pdf_hash": "", "abstract": [ { "text": "A method to reduce ambi gu ity at the level of word tagging, on the basis of local syntactic con straints, is described. Such \"short context\" con straints are easy to process and can remove most of the ambi gu ity at that level, which is otherwise a source of great difficulty for parsers and other applications in certain natural lan gu ages. The use of local constraints is also very effective for quick invalidation of a large set of ill-formed inputs. \\Vhile in some approaches local con straints\u2022 are defined manually or discovered by processing of large corpora, we extract them directly from a grammar (typically \u2022context free) of the given lan gu age. We focus on deterministic constraints, but later extend the rnethod for a probabilistic lan gu age model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Let S = w. , ... , WN be a sentence of length N, {Wi} being the words composing the sentence. Ideally, a lexical-morphological analyzer can assign to each word ITT a unique tag ti , expressing its grammatical characteristics (typi cally part of speech and fe atures). The unique tag image t 1 , \u2022\u2022\u2022 , t N of S could then serve as input to NLP applications, including -but not limited to -parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction: Local Constraints and their Use", "sec_num": "1." }, { "text": "1 The first author is also affiHated with the Open University.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction: Local Constraints and their Use", "sec_num": "1." }, { "text": "In reality, however, vV; may have more than one interpretation, hence t i is not uniquely defined. 
Examples of ambiguity at this level in English are nouns (both in singular and in plural forms) which can often be interpreted at word level as verbs; words ending with \"ing\", which are ambiguous between tentative readings as a progressive verb, a gerund and an adjective; etc. Hebrew, our main language of study, poses a much greater difficulty, because of the complexity of its morpho-syntax and the \"terse\" nature of the vowel-free writing system. In modern written Hebrew, nearly 60% of the words in running texts are ambiguous with respect to tagging, and the average number of possible readings of words in a running text is found to be 2.4 (see [Francis 82] for data on English). 2 In addition, in many cases the morphological analysis of a Hebrew word yields a sequence of tags rather than a single tag, and different interpretations may be mapped to sequences of different lengths (similar phenomena may be found in other Semitic languages and in Romance languages where cliticization occurs). This is in fact a different order of the ambiguity issue. Consider as an example the written character string VRD (ורד), which can be interpreted in Hebrew as:", "cite_spans": [ { "start": 765, "end": 777, "text": "[Francis 82]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction: Local Constraints and their Use", "sec_num": "1." }, { "text": "[ Noun ] (\"vered\" = a rose) or:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction: Local Constraints and their Use", "sec_num": "1." }, { "text": "[ Adj ] (\"varod\" = rosy) or:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction: Local Constraints and their Use", "sec_num": "1." }, { "text": "[ Conj, Verb ] (\"v-red\" = and descend).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction: Local Constraints and their Use", "sec_num": "1." }, { "text": "We will refer to a sequence of M tags (M >= N) which is a legal (per word) tag image corresponding to the sentence S = w1, ..., wN as a path.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction: Local Constraints and their Use", "sec_num": "1." }, { "text": "The second author's main affiliation is the IBM Scientific Center, Haifa, Israel. Please address e-mail correspondence to: rimon@hujics.BITNET or rimon@haifasc3.IINUS1.IBM.COM. 2 The degree of ambiguity is obviously affected by the grain of the tagging system (the level of detail of the tag set).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction: Local Constraints and their Use", "sec_num": null }, { "text": "The number of potentially valid paths can be exponential in the length of the sentence if all words are ambiguous. A parser will reduce this number to the minimum feasible. But we are interested in quicker, even if not perfect, methods to reduce the number of valid paths and word-level ambiguity. 3 This paper describes a method to reduce tagging ambiguity, based on local syntactic constraints. A local constraint of length k on a given tag t is a rule disallowing a sequence of k tags from being in the Short Context of t. 
Intuitively, a Short Context of length k of a tag t in a given sentence S, denoted by SC(t,k), is a sequence of k tags which immediately precedes or follows t in the tag image of S. To start with a more formal treatment of the short context notion, let us first add to the sentence S two special \"words\": \"$<\", denoting \"Start\", as the beginning-of-sentence marker, and \">$\", denoting \"End\", at the end of the sentence. These markers are also added to the tag image of the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction: Local Constraints and their Use", "sec_num": "1." }, { "text": "We can now look at the resolution of ambiguity as a graph searching problem. As an example, suppose we have a sentence with three words, A B C, and assume that the initial tagging output of the lexical analyzer is the following (rather unlikely for English, but quite realistic for Hebrew):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction: Local Constraints and their Use", "sec_num": "1." }, { "text": "for A: [ verb ] or [ det, noun ]; for B: [ pron ] or [ adv ]; for C: [ conj, adj ] or [ noun ]. Then we can look at SG, the Sentence Graph, which is a directed graph where arcs represent all a-priori possible local paths:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction: Local Constraints and their Use", "sec_num": "1." }, { "text": "[Figure: the Sentence Graph for A B C, with arcs from \"$<\" through the alternative tag sequences for A (verb, or det -> noun), B (pron, or adv) and C (conj -> adj, or noun) to \">$\".]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction: Local Constraints and their Use", "sec_num": "1." }, { "text": "Every path from \"$<\" to \">$\" represents a possible interpretation of S as a stream of tags. Note that invalidating even a small number of arcs from SG rapidly reduces the number of possible paths.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction: Local Constraints and their Use", "sec_num": "1." }, { "text": "As said above, we use local constraints to remove invalid arcs, and to finally arrive at the Reduced Sentence Graph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction: Local Constraints and their Use", "sec_num": "1." }, { "text": "Let T be the set of all possible tags - the tag set. The Right Short Context of length n of a tag t is defined by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction: Local Constraints and their Use", "sec_num": "1." }, { "text": "SCr(t,n) = { tz | z is in T*, with |z| = n, or |z| < n if z ends with \">$\" }, for t in T and for n = 0,1,2,3,...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction: Local Constraints and their Use", "sec_num": "1."
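To make the graph-searching view above concrete, here is a minimal sketch (entirely our illustration; the helper names and the invalid-pair set are invented, not taken from the paper). It enumerates the paths of the Sentence Graph for the A B C example and filters them by a validity test on adjacent tag pairs, which has the same effect as removing arcs from SG:

```python
from itertools import product

# Per-word analyses: each word maps to alternative tag sequences
# (note that alternatives may differ in length, as in Hebrew).
ANALYSES = {
    "A": [["verb"], ["det", "noun"]],
    "B": [["pron"], ["adv"]],
    "C": [["conj", "adj"], ["noun"]],
}

def paths(sentence, valid_pair=None):
    """Enumerate tag paths "$<" ... ">$"; optionally keep only those
    whose adjacent tag pairs all satisfy valid_pair (an SC(t,1) test)."""
    for choice in product(*(ANALYSES[w] for w in sentence)):
        path = ["$<"] + [t for seq in choice for t in seq] + [">$"]
        if valid_pair is None or all(
            valid_pair(a, b) for a, b in zip(path, path[1:])
        ):
            yield path

all_paths = list(paths(["A", "B", "C"]))
print(len(all_paths))  # 8 a-priori paths (2 x 2 x 2)

# A hypothetical pair of local constraints, for illustration only.
BAD = {("verb", "pron"), ("adv", "conj")}
pruned = list(paths(["A", "B", "C"], lambda a, b: (a, b) not in BAD))
print(len(pruned))     # 4 of the 8 paths survive this toy test
```

With the two hypothetical invalid pairs shown, the eight a-priori paths drop to four; with tables actually extracted from a grammar, as in the following chapters, the reduction is typically much sharper.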
}, { "text": "most of this knowledge is not explicit; for example, boundary conditions (the adjacency of a final tag in a constituent phrase with the initial tag of the following phrase) are not explicitly stated in a phrase structure grammar; they have to be extracted to be used for preliminary screening of lexical and morphological ambigui ties as described above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "202", "sec_num": null }, { "text": "In the following we will assume that an unre\u2022 -\u2022 stricted context-free phrase structure grammar (CFG), G, exists for the given language; L. Later we will discuss other grammars too. We will use the following notations: where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "202", "sec_num": null }, { "text": "T =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "202", "sec_num": null }, { "text": "A is in V, a is in ( V U T )*", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "202", "sec_num": null }, { "text": "For technical purposes, we will substitute every grammar rule of the form S --> a with an equivalent rule S --> $ < a > $, thus adding the two special terminals mentioned above to T.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "202", "sec_num": null }, { "text": "We will now revise the definitions of Short Context from chapter 1, relative to the \u2022 given grammar G. The rules in G are the only source for determining the validity of tag sequences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "202", "sec_num": null }, { "text": "The Right Short Context of length n of a ter minal t (tag) relative to the grammar G is defined by: We find the next(t) set by examining P, the rules of G: 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "202", "sec_num": null }, { "text": "r SC (t,n) for t in T and for n = 0,l,2,3 .\u2022. G tz I z is in T* , } I z I = n or } IZI < n if \">$ \"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "202", "sec_num": null }, { "text": "1. If there is a rul e in P of the form: A --> at x fJ and x is in T, then x is in next( t ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "202", "sec_num": null }, { "text": "A --> at B /J and Bi s in V, then the set fi rst( B) is a subset of next ( t ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "If there is a rul e in P of the form :", "sec_num": "2." }, { "text": "3. If.there is a rul e in P of the form: A --> at then the set fol low( A ) is a subset of next ( t ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "If there is a rul e in P of the form :", "sec_num": "2." }, { "text": "The computational complexity of the con struction of the set next( t ) . depends, on the complexity of computing the first and follow set. There are well known algorithms to fi.ndi these sets from a given CFG. The complexity of follow( t ) is exponential in the size of the look ahead window, which is the length of the context. This is another reason to limit the con texts to really short ones ( although note that the extraction of constraints from the grammar is a one-time preprocessing phase, hence the per formance issue is not critical).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "If there is a rul e in P of the form :", "sec_num": "2." 
}, { "text": "To conclude this chapter, we borrow the concept of event dependency from probability theory, just to offer the following view on short context constraints. The events being concat enation of tags, the short context \u2022 basically defines independent constraints, while in the full grammar the dependent constraints are expressed. This distinction is particularly apparent in SCr (t,l) or SCl(t,l), wher-e \"events\" only apply to a pair of neighbors; as the context gets longer, the constraints become more dependent and closer to the full grammar. The metaphorical description above gets especially interesting when a statistical dimension is added to the model (see chapter 4). There, indeed, SC( 1) considers independent probabilities of possible neighbors, where a full probabilistic grammar is supposed to look at the dependent events of tag concatenation along the full sen tence.", "cite_spans": [ { "start": 376, "end": 381, "text": "(t,l)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "If there is a rul e in P of the form :", "sec_num": "2." }, { "text": "It is therefore clear that the Short Context tech nique will license more sentences than a grammar would; or, from a dual point of view, it will invalidate only part of the impossible com binations of tag assi gnm ent. SCr(t,2) will have a closer fit coverage than SCr(t,l), and only in SCr(t,N) (where N is the finite length of a given sentence) the licensing power will be identical to the weak generative capacity of the full grammar (see illustration). However, SCr(t,N) has only the time complexity of a finite automaton (beware space complexity, though). The (theore tical and empirical) rate of convergence of the finite approximation is an interesting and impor tant research topic. If indeed for \u2022a rather small number M, SCr(t, M) provides most of the licensing power of a given full grammar, then the performance promise of short context methods is consequential for a variety of applications ( cf.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "If there is a rul e in P of the form :", "sec_num": "2." }, { "text": "[Church 80]). As mentioned before, it appears that even SCr(t, 1) can drastically reduce the a-priori polynomial number of tag sequences, typically to a number linearly proportional to the length of the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "If there is a rul e in P of the form :", "sec_num": "2." }, { "text": "Consider the following \"toy grammar\" \u2022 for a small :fragment of English ( a variant of the basic sample grammar in [Tomita 86]).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Example", "sec_num": "3." }, { "text": "The tag set includes only: n (noun), v (verb), det (determiner), adj ( adjective ) and prep (preposi-_ tion). The context free grammar G is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Example", "sec_num": "3." }, { "text": "S --> $< NP VP >$ NP --> det n NP --> n NP --> adj n NP --> det adj n NP --> NP PP PP --> prep NP VP --> V NP VP _.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Example", "sec_num": "3." }, { "text": ":.> VP PP G is a slightly mo\ufffdified version of a standard grammar, where the special symbols \"$ <\" (start) and \"> $\" ( end) are added.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Example", "sec_num": "3." 
}, { "text": "To extract the local constraints from this grammar, we first compute the function next( t ) for every tag t in T, and from the result sets we obtain the graph below, showing valid moves in the short context of length 1 (validity is, of course, relative to the given toy grammar) : r The SC (t,1) Graph G $< \ufffd det 7-..", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Example", "sec_num": "3." }, { "text": "V \ufffd --,-, -\u2022 ---\ufffdTv---_ -,l' -\u2022 -p rep", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "X\ufffd l_\\", "sec_num": null }, { "text": "The table of valid neighbors is derived directly from the graph:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "X\ufffd l_\\", "sec_num": null }, { "text": "r SC (t,1) Tabl e G $< det $< n $< adj prep det prep n prep adj V det V n V adj adj n det adj det n n V n prep n >$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "X\ufffd l_\\", "sec_num": null }, { "text": "This table describes the closure of next( t ) for all terminals in G. SC(t,l) table, relative to T2 . Here, information about terminal pairs which can never appear in a legal sentence is represented. Such a table may be used by grammar developers to test a grammar, presenting small \"checklist tests\" which are easy to make. PSC(t, l ,i) possibilities. This is done by tracing the way from \"$ < \" forward. The Positional Short Context tables are the following:", "cite_spans": [], "ref_spans": [ { "start": 70, "end": 77, "text": "SC(t,l)", "ref_id": null }, { "start": 325, "end": 337, "text": "PSC(t, l ,i)", "ref_id": null } ], "eq_spans": [], "section": "X\ufffd l_\\", "sec_num": null }, { "text": "r PSC ( t, 1, i) G Position: 0 ---> 1 1 ---> 2 $< det det n $< n \u2022det adj $< adj n prep n V n >$ adj n 2 ---> 3 n V n prep n > $ prep det prep n prep adj V det V n v adj adj n 3 ---> 4 V n prep det adj", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From the SC(t, l) graph above we can now extract information about the Positional", "sec_num": null }, { "text": "Another useful information one can obtain from the SC(t, l) graph is the inverse of the tables above -the Positional SC that may be allowed when going from the end of a sentence back wards. This is, in fact, the Positional Left Short Context. What has to be done to create the tables is to invert every arc in the SC(t, 1) graph. Other than that, the procedure is the same. It is interesting to note that in our example the closure appears later when scanning the sentence backwardsfrom right to left.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Note that from positions 3-> 4 on, the table gets identical to the general SC(t, 1) table (the closure).", "sec_num": null }, { "text": "A final technical comment before showing the operation on a sample sentence: When the short context of distinct occurrences of the same terminal is different, it is useful to distin gu ish between them using an index. This will add more information about the PSC when tracing the Sentence Graph. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Note that from positions 3-> 4 on, the table gets identical to the general SC(t, 1) table (the closure).", "sec_num": null }, { "text": "The method described above can be extended to be useful in a variety of situations other than those presented. 
}, { "text": "The method described above can be extended to be useful in a variety of situations other than those presented. In this chapter we briefly discuss several such extensions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extensions", "sec_num": "4." }, { "text": "We already demonstrated how effective and efficient word tagging and path reduction can be used in a pre-parsing filter. We also mentioned applications (e.g. some types of proof-reading aids) which do not call for full parsing, but require \"stand-alone\" tagging disambiguation and can benefit from fast recognition of many illegal inputs. We now turn to discuss a probabilistic language model, and see how short context considerations can be extended to account for probabilistic constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extensions", "sec_num": "4." }, { "text": "In the probabilistic environment, adjacent tags are not only valid (1) or invalid (0), but are allowed with any given probability between 0 and 1. This model may be more realistic for NLP systems which process real-life texts, where some phenomena happen more frequently than others. The Short Context tables will therefore have to include weights.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extensions", "sec_num": "4." }, { "text": "We will first assume that a probabilistic context-free grammar, such as described by [Fujisaki 89], is given.", "cite_spans": [ { "start": 85, "end": 98, "text": "[Fujisaki 89]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Extensions", "sec_num": "4." }, { "text": "To conclude this chapter, we note that one may consider construction of deterministic grammars from corpora. Here the rules themselves will be defined based on data found in the text. Such grammars tend to be very large (cf. [Atwell 88]). Part of the reason is the grain of the tag set: such grammars might be inflated by the creation of \"families\" of very similar rules, not being able to recognize a generalization over similar tags. Another reason is in the distribution of rules (phrase structure) - only a small number of rules apply in a significant number of sample sentences, while most of the rules were derived from single examples. The performance efficiency of parsers (deterministic or probabilistic) based on such methods will greatly suffer from the large size of the grammar. But for the processing of local constraints, the size of the grammar is not terribly important. Once the preprocessing phase has been completed, the actual testing of constraints is not badly affected by the size of the constraints tables, thus making the local constraints approach effective in such an environment as well.", "cite_spans": [ { "start": 221, "end": 238, "text": "(cf. [Atwell 88])", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Extensions", "sec_num": "4."
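The probabilistic construction itself is truncated in the source text; as one plausible reading (entirely our sketch, not the paper's method), weights for an SC(t,1) table can be estimated as conditional probabilities of adjacent tag pairs - here from a tiny tagged corpus, though in the setting of [Fujisaki 89] they would come from the rule probabilities of a probabilistic CFG:

```python
from collections import Counter
from itertools import pairwise  # Python 3.10+

# A tiny tagged corpus stands in for the probabilistic model;
# the sentences and counts below are invented for illustration.
CORPUS = [
    ["$<", "det", "n", "v", "n", ">$"],
    ["$<", "n", "v", "det", "adj", "n", ">$"],
    ["$<", "det", "adj", "n", "v", "n", ">$"],
]

pair_counts = Counter(p for sent in CORPUS for p in pairwise(sent))
left_counts = Counter(a for (a, _b) in pair_counts.elements())

def weight(a, b):
    """Estimated P(b | a): a weighted SC(t,1) entry instead of a 0/1 one."""
    return pair_counts[(a, b)] / left_counts[a] if left_counts[a] else 0.0

print(round(weight("det", "n"), 2))   # 0.33 on this toy corpus
print(weight("n", "det"))             # 0.0: never observed

# An arc of the Sentence Graph is then tested against a threshold,
# rather than by a boolean table lookup.
THRESHOLD = 0.05
def arc_ok(a, b):
    return weight(a, b) >= THRESHOLD
```

Under such a model, path reduction can also rank the surviving paths by the product of their arc weights instead of merely filtering them.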
}, { "text": "We have not attempted a rigorous discussion of the performance gains expected when applying tagging disambiguation in a pre-parsing filter and/or in the parsing process itself. The question is not easy. It strongly depends on the parsing technique, on one hand, and on the degree of ambiguity in the given language (as reflected in a given grammar), on the other hand. Naive bottom-up parsers, which assume a single combination of tags in each analysis pass against the grammar, can certainly benefit, by drastically reducing the exponential number of passes needed a-priori in cases of heavy ambiguity. Other, more sophisticated parsing techniques (cf. [Kay 80], for example) can also save in computational complexity, by taking earlier decisions on inconsistent tag assignments and/or by requiring a smaller grammar. The detailed analysis here is not simple. But it seems that, although the constraints are drawn only from the grammar, and as such they are somehow expressed (explicitly or implicitly) and will take effect during parsing, the different order of computation and the restriction to finite-length considerations are sources for considerable time saving.", "cite_spans": [ { "start": 651, "end": 665, "text": "(cf. [Kay 80]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Final Remarks", "sec_num": "5." }, { "text": "Another important question concerns properties of the grammar that help build an effective filter of tentative paths. The grain of the tag set is one such significant factor. A more refined tag set helps express more refined syntactic claims, but it also gives rise to a greater level of tagging ambiguity. It also requires a larger grammar (or longer lists of conditions on features, attached to phrase structure rules, which we here assume to be already reflected in the rules themselves), hence a larger set of local constraints. But these constraints will be much more specific and therefore more effective in resolving ambiguities. A rigorous analysis of this issue will help understand better what makes an effective disambiguator. An important point to make is that our method guarantees uniformity of the tag set used for the filter and for any parser acting upon the given grammar, thus making it useful in a variety of environments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Final Remarks", "sec_num": "5." }, { "text": "3 Note that the two sub-goals of the tagging ambiguity problem - reducing the number of paths and reducing word-level possibilities - are not identical. One can easily construct sample sentences where each word is two-way ambiguous, hence the sentence has 2^N potential paths, of which only two are valid, while still keeping all word-level ambiguity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "4 The functions \"first\" and \"follow\" are used here much like in standard parsing techniques for both programming languages and natural languages; see [Aho 72] as a general reference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "5 The sentence is analyzed here relative to the limited tag set of the sample grammar. Depending on the tag set, the lexicon and the grammar, the level of ambiguity (and the results in this particular case) may be different.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "It may not be absolutely required that only cases appearing in correct analyses are counted. Data resulting from wrong analyses may turn out to be statistically insignificant, relative to real and frequent phenomena; cf. [Dagan 90].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "M. Bahr and E. Lozinskii gave us helpful comments and suggestions on earlier drafts of this paper. We gratefully acknowledge their contribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "[Aho 72] Alfred V. Aho and Jeffrey D. Ullman. The Theory of Parsing, Translation and Compiling. 
Prentice-Hall, 1972-3. [Kay 80] Martin Kay. Algorithm Schemata and Data Structures in Syntactic Processing. Xerox PARC CSL-80-12, 1980. Reprinted in Readings in Natural Language Processing, Grosz, Sparck Jones and Webber (eds.), Morgan Kaufmann, 1986. [Lozinskii 86] Eliezer L. Lozinskii and Sergei Nirenburg. Parsing in Parallel. Comp. Languages, UK, Vol. 11, No. 1, pp. 39-51, 1986. [ Humanities, Vol. 17, pp. 139-150, 1983. [Shamir 74] Eli Shamir and Catriel Beeri. Checking Stacks and Context-Free Programmed Grammars Accept P-complete Languages. Proc. of the 2nd Colloq. on Automata, Languages and Programming, Lecture Notes in Computer Science, Vol. 14, pp. 27-33, 1974. [Tomita 86] Masaru Tomita. Efficient Parsing for Natural Language. Kluwer Academic Pub., 1986. [Wright 89] J. H. Wright and E. N. Wrigley. Probabilistic LR Parsing for Speech Recognition. Proc. of the 1st International Parsing Workshop, Pittsburgh, June 1989.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "References", "sec_num": null } ], "bib_entries": {}, "ref_entries": { "FIGREF0": { "text": "T = the set of terminal symbols (the tag set); $< = the sentence start terminal; >$ = the sentence end terminal; V = the set of variables (non-terminals); S = the root variable for derivations; P = production rules of the form A --> α, where A is in V and α is in (V ∪ T)*", "uris": null, "num": null, "type_str": "figure" } } } }