{ "paper_id": "W89-0207", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:44:34.277094Z" }, "title": "P a r s in g w it h P r in c ip le s : P r e d i c t in g a P h r a s a l N o d e B e f o r e I ts H e a d A p p e a r s 1 2", "authors": [ { "first": "Edward", "middle": [], "last": "Gibson", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University Pittsburgh", "location": { "postCode": "15213", "region": "PA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "eafg;3>cad. cs.cmu.edu 1 Introduction Recent work in generative syntactic theory has shifted the conception of a natural language grammar from a hom ogeneous set of phrase structure (P S) rules to a heterogeneous set of well-formedness constraints on representations (see, for exam ple, C hom sky (1981), Stowell (1981), C hom sky (1986a) and Pollard k Sag (1987)). In these theories it is assumed that the grammar contains principles that are independent of the language being parsed, together with principles that are parameterized to reflect the varying behavior of different languages. However, there is more to a theory of human sentence processing than just a theory of linguistic com petence. A theory of performance consists of both linguistic knowledge and a parsing algorithm. T his paper will investigate ways of exploiting principle-based syntactic theories directly in a parsing algorithm in order to determ ine whether or not a principle-based parsing algorithm can be com patible with psycholinguistic evidence. Principle-based parsing is an interesting research topic not only from a psycholinguistic point of view but also from a practical point o f view. W hen PS rules are used, a separate grammar must be written for each language parsed. Each of these gramm ars contains a great deal of redundant information. 
For example, there may be two rules, in different grammars, that are identical except for the order of the constituents on the right-hand side, indicating a difference in word order. This redundancy can be avoided by employing a universal phrase structure component (not necessarily in the form of rules) along with parameters and associated values. A principles-and-parameters approach provides a single compact grammar for all languages that would otherwise be represented by many different (and redundant) PS grammars. Any model of human parsing must dictate: a) how structures are projected from the lexicon; b) how structures are attached to one another; and c) what constraints affect the resultant structures. This paper will concentrate on the first two components with respect to principle-based parsing algorithms: node projection and structure attachment. Two basic control structures exist for any parsing algorithm: data-driven control and hypothesis-driven control. Even if a parser is predominantly hypothesis-driven, the predictions that it makes must at some point be compared with the data that are presented to it. Some data-driven component is therefore necessary for any parsing algorithm. Thus, a reasonable hypothesis to test is that the human parsing algorithm is entirely data-driven. This is exactly the approach taken by a number of principle-based parsing algorithms (see, for example, Abney (1986), Kashket (1987), Gibson & Clark (1987) and Pritchett (1987)). These parsing algorithms each include a node projection algorithm that projects an input word to a maximal category, but does not cause the projection of any further nodes. Although this strategy is attractive because of its simplicity, it turns out that it cannot account for certain phenomena observed in the processing of Dutch (Frazier (1987); see Section 2.1). 
A completely data-driven node projection algorithm also has difficulty accounting for the processing ease of adjective-noun constructions in English (see Section 2.2). As a result of this evidence, a purely data-driven node projection 1 Paper presented at the International Workshop on Parsing Technologies, August 28-31, 1989. 2 I would like to thank Robin Clark, Rick Kazman, Howard Kurtzman, Eric Nyberg and Brad Pritchett for their comments on earlier drafts of this paper, and I offer the usual disclaimer. algorithm must be rejected in favor of a node projection algorithm that has a predictive (hypothesis-driven) component (Frazier (1987)). This paper describes a node projection algorithm that is part of the Constrained Parallel Parser (CPP) (Gibson (1987), Gibson & Clark (1987) and Clark & Gibson (1988)). This parser is based on the principles of Government-Binding theory (Chomsky (1981, 1986a)). Section 3.1 gives an overview of the CPP model, while Section 3.2 describes the node projection algorithm. Section 3.3 describes the attachment algorithm, and includes an example parse. These node projection and attachment algorithms demonstrate that a principle-based parsing algorithm can account for the Dutch and English data, while avoiding the existence of redundant phrase structure rules. Thus it is concluded that one should continue to investigate hypothesis-driven principle-based models in the search for an optimal psycholinguistic model. 2 Data-Driven Node Projection: Empirical Predictions and Results 2.1 Evidence from Dutch Consider the sentence fragment in (1): (1) ... dat het meisje van Holland ... ... \"that the girl from Holland\" ... Dutch is like English in that prepositional phrase modifiers of nouns may follow the noun. Thus the prepositional phrase van Holland may be a modifier of the noun phrase the girl in example (1). Unlike English, however, Dutch is SOV in subordinate clauses. 
Hence in (1) the prepositional phrase van Holland may also be the argument of a verb to follow. In particular, if the word glimlachte (\"smiled\") follows the fragment in (1), then the prepositional phrase van Holland can attach to the noun phrase that it follows, since the verb glimlachte has no lexical requirements (see (2a)). If, on the other hand, the word houdt (\"likes\") follows the fragment in (1), then the PP van Holland must attach as argument of the verb houdt, since the verb requires such a complement (see (2b)).", "pdf_parse": { "paper_id": "W89-0207", "_pdf_hash": "", "abstract": [ { "text": "1 Introduction Recent work in generative syntactic theory has shifted the conception of a natural language grammar from a homogeneous set of phrase structure (PS) rules to a heterogeneous set of well-formedness constraints on representations (see, for example, Chomsky (1981), Stowell (1981), Chomsky (1986a) and Pollard & Sag (1987)). In these theories it is assumed that the grammar contains principles that are independent of the language being parsed, together with principles that are parameterized to reflect the varying behavior of different languages. However, there is more to a theory of human sentence processing than just a theory of linguistic competence. A theory of performance consists of both linguistic knowledge and a parsing algorithm. This paper will investigate ways of exploiting principle-based syntactic theories directly in a parsing algorithm in order to determine whether or not a principle-based parsing algorithm can be compatible with psycholinguistic evidence. Principle-based parsing is an interesting research topic not only from a psycholinguistic point of view but also from a practical point of view. When PS rules are used, a separate grammar must be written for each language parsed. Each of these grammars contains a great deal of redundant information. 
For example, there may be two rules, in different grammars, that are identical except for the order of the constituents on the right-hand side, indicating a difference in word order. This redundancy can be avoided by employing a universal phrase structure component (not necessarily in the form of rules) along with parameters and associated values. A principles-and-parameters approach provides a single compact grammar for all languages that would otherwise be represented by many different (and redundant) PS grammars. Any model of human parsing must dictate: a) how structures are projected from the lexicon; b) how structures are attached to one another; and c) what constraints affect the resultant structures. This paper will concentrate on the first two components with respect to principle-based parsing algorithms: node projection and structure attachment. Two basic control structures exist for any parsing algorithm: data-driven control and hypothesis-driven control. Even if a parser is predominantly hypothesis-driven, the predictions that it makes must at some point be compared with the data that are presented to it. Some data-driven component is therefore necessary for any parsing algorithm. Thus, a reasonable hypothesis to test is that the human parsing algorithm is entirely data-driven. This is exactly the approach taken by a number of principle-based parsing algorithms (see, for example, Abney (1986), Kashket (1987), Gibson & Clark (1987) and Pritchett (1987)). These parsing algorithms each include a node projection algorithm that projects an input word to a maximal category, but does not cause the projection of any further nodes. Although this strategy is attractive because of its simplicity, it turns out that it cannot account for certain phenomena observed in the processing of Dutch (Frazier (1987); see Section 2.1). 
A completely data-driven node projection algorithm also has difficulty accounting for the processing ease of adjective-noun constructions in English (see Section 2.2). As a result of this evidence, a purely data-driven node projection 1 Paper presented at the International Workshop on Parsing Technologies, August 28-31, 1989. 2 I would like to thank Robin Clark, Rick Kazman, Howard Kurtzman, Eric Nyberg and Brad Pritchett for their comments on earlier drafts of this paper, and I offer the usual disclaimer. algorithm must be rejected in favor of a node projection algorithm that has a predictive (hypothesis-driven) component (Frazier (1987)). This paper describes a node projection algorithm that is part of the Constrained Parallel Parser (CPP) (Gibson (1987), Gibson & Clark (1987) and Clark & Gibson (1988)). This parser is based on the principles of Government-Binding theory (Chomsky (1981, 1986a)). Section 3.1 gives an overview of the CPP model, while Section 3.2 describes the node projection algorithm. Section 3.3 describes the attachment algorithm, and includes an example parse. These node projection and attachment algorithms demonstrate that a principle-based parsing algorithm can account for the Dutch and English data, while avoiding the existence of redundant phrase structure rules. Thus it is concluded that one should continue to investigate hypothesis-driven principle-based models in the search for an optimal psycholinguistic model. 2 Data-Driven Node Projection: Empirical Predictions and Results 2.1 Evidence from Dutch Consider the sentence fragment in (1): (1) ... dat het meisje van Holland ... ... \"that the girl from Holland\" ... Dutch is like English in that prepositional phrase modifiers of nouns may follow the noun. Thus the prepositional phrase van Holland may be a modifier of the noun phrase the girl in example (1). Unlike English, however, Dutch is SOV in subordinate clauses. 
Hence in (1) the prepositional phrase van Holland may also be the argument of a verb to follow. In particular, if the word glimlachte (\"smiled\") follows the fragment in (1), then the prepositional phrase van Holland can attach to the noun phrase that it follows, since the verb glimlachte has no lexical requirements (see (2a)). If, on the other hand, the word houdt (\"likes\") follows the fragment in (1), then the PP van Holland must attach as argument of the verb houdt, since the verb requires such a complement (see (2b)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "... \"that the girl likes Holland\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Following Abney (1986), Frazier (1987), Clark & Gibson (1988) and numerous others, it is assumed that attached structures are preferred over unattached structures. If we also assume that a phrasal node is not projected until its head is encountered, we predict that people will entertain only one hypothesis for the sentence fragment in (1): the modifier attachment. Thus we predict that it should take longer to parse the continuation houdt (\"likes\") than to parse the continuation glimlachte (\"smiled\"), since the continuation houdt forces the prepositional phrase to be reanalyzed as an argument of the verb. However, contrary to this prediction, the verb that allows argument attachment is actually parsed faster than the verb that necessitates modifier attachment in sentence fragments like (1). 
If the verb had been projected before its head was encountered, then the argument attachment of the PP van Holland would be possible at the same time that the modifier attachment is possible.3 Thus Frazier concludes that in some cases phrasal nodes must be projected before their lexical heads have been encountered.", "cite_spans": [ { "start": 25, "end": 39, "text": "Frazier (1987)", "ref_id": "BIBREF5" }, { "start": 42, "end": 63, "text": "Clark & Gibson (1988)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "3 It is beyond the scope of this paper to offer an explanation as to why the argument attachment is in fact preferred to the modifier attachment. This paper seeks only to demonstrate that the argument attachment possibility must at least be available for a psychologically real parser. See Abney (1986), Frazier (1987) and Clark & Gibson (1988) for possible explanations for the preference phenomenon.", "cite_spans": [ { "start": 322, "end": 336, "text": "Frazier (1987)", "ref_id": "BIBREF5" }, { "start": 341, "end": 363, "text": "Clark & Gibson (1988)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "2.2 Evidence from English A second piece of evidence against this limited type of node projection is provided by the processing of noun phrases in English that have more than one pre-head constituent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "It is assumed that the primitive operation of attachment is associated with a certain processing cost.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Hence the amount of time taken to parse a single input word is directly related to the number of attachments that the parser must execute to incorporate that structure into the existing 
structure(s). If a phrasal node is not projected until its head is encountered, then parsing the final word of a head-final construction will involve attaching all its pre-head structures at that point. If, in addition, there is more than one pre-head structure and no attachments are possible until the head appears, then a significant proportion of processing time should be spent in processing the head.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The hypothesis that a phrasal node is not projected until its head is encountered can be tested with the English noun phrase, since the head of an English noun phrase appears after a specifier and any adjectival modifiers. For example, consider the English noun phrase the big red book. First, the word the is read and a determiner phrase is built. Since it is assumed that nodes are not projected until their heads are encountered, no noun phrase is built at this point. The word big is now read and causes the projection of an adjective phrase. Attachments are now attempted between the two structures built thus far. Neither of the categories can be argument, specifier or modifier for the other, so no attachment is possible. The next word red now causes the projection of an adjective phrase, and once again no attachments are possible. Only when the word book is read and projected to a noun phrase can attachments take place. First the adjective phrase representing red attaches as a modifier of the noun phrase book. Then the AP representing big attaches as a modifier of the noun phrase just constructed. 
Finally the determiner phrase representing the attaches as specifier of the noun phrase big red book.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Thus if we assume that a phrasal node is not projected until its head is parsed, we predict that a greater number of attachments will take place in parsing the head than in parsing any other word in the noun phrase. Since it is assumed that an attachment is a significant parser operation, it is predicted that people should take more time parsing the head of the noun phrase than they take parsing the other words of the noun phrase. Since there is no psycholinguistic evidence that people take more time to process heads in head-final constructions, I hypothesize that phrasal nodes are being projected before their heads are encountered.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This paper assumes the Constrained Parallel Parser (CPP) as its model of human sentence processing (see Gibson (1987) ", "cite_spans": [ { "start": 109, "end": 122, "text": "Gibson (1987)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "The Parsing Model: The Constrained Parallel Parser", "sec_num": "3.1" }, { "text": "A lexical entry accessed by CPP consists of, among other things, a theta-grid. A theta-grid is an unordered list of theta structures. Each theta structure consists of a thematic role and associated subcategorization information. One theta structure in a theta-grid may be marked as indirect to refer to its subject. 
For example, the word shout might have the following theta-grid:4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Entries for CPP", "sec_num": null }, { "text": "(3) ((Subcat = PREP, Thematic-Role = GOAL) (Subcat = COMP, Thematic-Role = PROPOSITION)) When the word shout (or an inflected variant of shout) is encountered in an input phrase, the thematic role agent will be assigned to its subject, as long as this subject is a noun phrase. The direct thematic roles goal and proposition will be assigned to prepositional and complementizer phrases respectively, as long as each is present. Since the order of theta structures in a theta-grid is not relevant to its use in parsing, the above theta-grid for shout will be sufficient to parse both sentences in (4). 3.1.2 X-bar Theory in CPP The CPP model assumes X-bar Theory as presented in Chomsky (1986b). X-bar Theory has two basic principles: first, each tree structure must have a head; and second, each structure must have a maximal projection. As a result of these and other principles (e.g., the θ-Criterion, the Extended Projection Principle, Case Theory), the positions of arguments, specifiers and modifiers with respect to the head of a given structure are limited. In particular, a specifier may only appear as a sister to the one-bar projection below a maximal projection, and the head, along with its arguments, must appear below the one-bar projection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Entries for CPP", "sec_num": null }, { "text": "The order of the specifier and arguments relative to the head is language-dependent. For example, the basic structure of English categories is shown below. 
Furthermore, binary branching is assumed (Kayne (1983)), so that modifiers are Chomsky-adjoined to the two-bar or one-bar levels, giving one possible structure for a post-head modifier below on the right.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical Entries for CPP", "sec_num": null }, { "text": "[Figure: X-bar tree schemata (Specifier, head X, Argument(s), and a Chomsky-adjoined Modifier); diagrams garbled in extraction.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Node projection proceeds as follows. First a lexical item is projected to a phrasal node: a Confirmed node (C-node). Following X-bar Theory, each lexical entry for a given word is projected maximally. For example, the word rock, which has both a noun and a verb entry, would be projected to at least two maximal projections. In English the argument projection parameter is set to *head*, so that arguments appear after the head.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Projection of Nodes from the Lexicon", "sec_num": "3.2" }, { "text": "Hence, if a lexical entry has requirements that must be filled, then structures corresponding to subcategorized categories are projected as H-nodes in argument position. As a result of the H-node Projection Constraint, H-nodes may not invoke H-node projection. For example, if a specifier causes the projection of its head, the resulting head cannot then cause the projection of those categories that it may specify. As a result, the number of nodes that may be projected from a single lexical item is severely restricted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Projection of Nodes from the Lexicon", "sec_num": "3.2" }, { "text": "Given the above node projection algorithm, it is necessary to define an algorithm for attachment of nodes. 
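The projection procedure just described can be sketched in code. This is a minimal illustration under simplifying assumptions, not the actual CPP implementation; the licensing tables (SPECIFIES, MODIFIES) and all function names are hypothetical stand-ins:

```python
# Toy sketch of C-node / H-node projection with the H-node Projection
# Constraint. The tables below are hypothetical simplifications: in
# English, determiners specify and adjectives pre-modify noun phrases.
SPECIFIES = {"det": "noun"}
MODIFIES = {"adj": "noun"}

def project(word, category):
    """Project a word to its Confirmed maximal projection (C-node), plus
    a Hypothesised node (H-node) for any head that the C-node could
    specify or modify in pre-head position."""
    c_node = {"kind": "C-node", "cat": category, "word": word}
    nodes = [c_node]
    for table in (SPECIFIES, MODIFIES):
        head_cat = table.get(category)
        if head_cat is not None:
            # H-node Projection Constraint: the H-node is built directly
            # and never fed back through project(), so an H-node cannot
            # trigger the projection of further H-nodes.
            nodes.append({"kind": "H-node", "cat": head_cat, "child": c_node})
    return nodes

# "the" projects to a C-node determiner phrase plus an H-node NP,
# as in the example parse of "the big red book" later in the paper.
nodes = project("the", "det")
```

On this sketch, a head such as book projects only its own C-node, while each pre-head word additionally predicts the H-node it will attach into.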
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N ode A ttachm ent", "sec_num": "3.3" }, { "text": "Since the preposition beside subcategorizes for a noun phrase, there is an H-node N P attached as its object. T he attachm ent algorithm should allow a single attachm ent at this point: the noun phrase representing Frank uniting with the H-node N P object of beside:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N ode A ttachm ent", "sec_num": "3.3" }, { "text": "(14) [pp [p' [p beside ] [ s p Frank ]]]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N ode A ttachm ent", "sec_num": "3.3" }, { "text": "As should be clear from the two exam ples, the process of attachm ent involves comparing a previously predicted category with a current category. If the two categories are compatible, then attachm ent m ay be viable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N ode A ttachm ent", "sec_num": "3.3" }, { "text": "Compatibility is defined in terms of unification, which is defined terms o f subsumption.8 A structure X is said to subsume a structure V' if X is more general than Y. T hat X contains less specific information them Y. So, for exam ple, a structure that is specified as clausal ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N ode Com patibility", "sec_num": "3.3.1" }, { "text": "Roughly speaking, the attachm ent operation should locate an H-node in a structure on the stack along with a compatible node in a structure in the buffer. If both of these structures have parent tree structures, then these parent tree structures must also be compatible. 
In order to keep the process of attachment simple, it is proposed that each attachment involve at most one nontrivial compatibility check. This constraint is given in (16):9", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attachment", "sec_num": "3.3.2" }, { "text": "Attachment Constraint: At most one nontrivial lexical feature unification is permitted per attachment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attachment", "sec_num": "3.3.2" }, { "text": "A nontrivial unification is one that involves two nontrivial structures; a trivial unification is one that involves at least one trivial structure. For example, if the parent node of the buffer site is as yet undefined, then the parent node of the stack site trivially unifies with this parent node. Only when both parents are defined is there a nontrivial unification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attachment", "sec_num": "3.3.2" }, { "text": "Consider the effect of the following three requirements: first, the lexical features of the stack and buffer attachment sites must be compatible; second, the tree structures above the buffer and stack attachment sites must be compatible; and third, at most one lexical feature unification is permissible per attachment, (16). Since any attachment must involve at least one nontrivial lexical feature unification, that of the stack and buffer sites, any additional nontrivial unifications will violate the attachment constraint in (16). If both the buffer and stack attachment sites have parent tree structures, then the lexical features of these parents will need to be unified. Since the child structures will also need to be unified, (16) will be violated. 
Thus it follows that, in an attachment, either the buffer site or the stack site has no parent tree structure.10", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attachment", "sec_num": "3.3.2" }, { "text": "Since the order of the words in the input must be maintained in a final parse, only those nodes in a buffer structure that dominate all lexical items in that structure are permissible as attachment sites. For example, suppose that the buffer contained a representation for the noun phrase women in college. Furthermore, suppose that there is an H-node NP on the stack representing the word the. Although it would be suitable for the buffer structure representing the entire noun phrase women in college to match the stack H-node, it would not be suitable for the C-node NP representing college to match this H-node. This attachment would result in a structure that moved the lexical input women in to the left of the lexical input dominated by the matched H-node, producing a parse for the input women in the college. Since the word order of the input string must be maintained, sites for buffer attachment must dominate all lexical items in the buffer structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attachment", "sec_num": "3.3.2" }, { "text": "Once suitable maximal projections in each of the buffer and stack structures have been identified for matching, it is still necessary to check that their internal structures are compatible. For example, suppose that an identified buffer site is a C-node whose head allows exactly one specifier and a specifier is already attached. If the stack H-node site also contains a specifier, then the attachment should be blocked. 
On the other hand, if the stack H-node site does not contain a specifier, and other requirements are satisfied, then the attachment should be allowed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attachment", "sec_num": "3.3.2" }, { "text": "Testing for internal structure compatibility is quite simple if all tree structures are assumed to be binary branching ones. The only possible attachment sites inside the stack H-node are those nodes that dominate no other nodes. As long as there is some buffer node that both dominates all the buffer input and matches the H-node attachment site for bar level, then the attachment is possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attachment", "sec_num": "3.3.2" }, { "text": "A structure W in the buffer can attach to a structure X on the stack iff all of (a), (b), (c) and (d) hold. If attachment is viable, then W contains a structure Y that is bar-level compatible with a structure Z that is part of X. Since Y and Z are bar-level compatible, there are structures S and T inside Y and Z. When the conditions for attachment are satisfied, structures W and X are united in the following way. First, W and X are copied to nodes W' and X' respectively. Inside X' there is a node, Z', that is a copy of Z. The lexical features of Z' are set to the unification of the lexical features of structures Y and Z. Next, structure V in Z' (corresponding to structure T in Z) is replaced by S', the copy of structure S inside W'. 
The bar level of V is set to the unification of the bar levels of structures S and T.", "cite_spans": [ { "start": 98, "end": 101, "text": "(d)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Attachment is formally defined in (17): (17)", "sec_num": null }, { "text": "Finally, the tree structures above Y and Z are unified and this tree structure is attached above Z'. That is, if Z has some parent tree structure and Y does not, then the copy of this structure inside X' is attached above Z'. Similarly, if Y has some parent tree structure and Z does not, then the copy of this structure inside W' is attached above Z'. If neither node has any parent tree structure (i.e., W = Y, X = Z), then the unification is trivial and no attachment is made. Since Y and Z cannot both have parent tree structures (see (16) and the discussion following it), unifying the parent tree structures is a very simple process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attachment is formally defined in (17): (17)", "sec_num": null }, { "text": "respectively, that satisfy the conditions of bar-level compatibility, (18).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attachment is formally defined in (17): (17)", "sec_num": null }, { "text": "As an illustration of how attachments take place, consider once again the noun phrase the big red book. First the determiner the is read and is projected to a C-node determiner phrase. Since a determiner is allowable as the specifier of a noun phrase and specifiers occur before the head in English, an H-node NP is also built. These two structures are depicted in (19). Since there is nothing on the stack, these structures are shifted to the top of the stack. The word big projects to both a C-node AP and an H-node NP since an adjective is allowable as a pre-head modifier in English. These two structures are placed in the buffer (depicted in (20)). 
An attachment between nodes (19b) and (20b) is now attempted. Note that: a) node (20b) is a maximal projection dominating all lexical material in its buffer structure; b) node (19b) is a maximal projection H-node on the stack; c) the tree structures above these two nodes are compatible (both are undefined); and d) the categories of the two nodes are compatible. It remains to check for bar-level compatibility of the two structures. Since: a) the N'2 in structure (20b) dominates all the buffer input; b) the H-node in structure (19b) dominates no C-nodes; and c) N'1 and N'2 are compatible in bar level, the structures in (19b) and (20b) can be attached. The two structures are therefore attached by uniting N'1 and N'2. The resultant structure is given in (21): Structure (21), the only possible attachment between the buffer and the stack, is placed back in the buffer, and the stack is popped. Since there is now nothing left on the stack, no further attachments are possible at this time. Structure (21) is thus shifted to the stack. The word red now enters the buffer as a C-node adjective phrase and an H-node noun phrase: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example Attachments", "sec_num": "3.3.3." }, { "text": "(21)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example Attachments", "sec_num": "3.3.3." }, { "text": "A noun phrase is projected to an H-node clausal (or predicate) phrase since nouns may be the subjects of predicates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "See Shieber (1986) for background on the possible uses of unification in particular grammar formalisms. 9 In fact, this constraint follows from two assumptions: first, a compatibility check takes a certain amount of processing time; and second, attachments that take less time are preferred over those that take more time. 
See Gibson (forthcoming) for further discussion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "It might seem that some possible attachments are being thrown away at this point. That is, in principle, there might be a structure that can only be formed by attaching a buffer site to a stack site where both sites have parent tree structures. This attachment would be blocked by (16). However, it turns out that any attachment that could have been formed by an attachment involving more than one lexical feature unification can always be arrived at by a different attachment involving a single lexical feature unification. For the proof, see Gibson (forthcoming).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "An attachment between nodes (21) and (22b) is now attempted. Requirements (17a)-(17d) are satisfied and the requirement for bar-level compatibility is satisfied by the node labeled N3 in (21) together with N' in (22b). Hence the structures are united, giving (23). Since (23) is the only possible attachment between the buffer and the stack, it is placed in the buffer and the stack is popped. Since the stack is now empty, structure (23) shifts to the stack. The noun book now enters the buffer as both a C-node noun phrase and an H-node clausal phrase: Two attachments are possible at this point. The NP structure in (23) unites with each NP C-node on the stack, resulting in the structures in (25): Note that only one attachment per structure takes place in the final parse step. 
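The final parse step just described, in which the buffer NP unites with each NP C-node on the stack, amounts to one attachment attempt per independent stack structure. A hypothetical sketch (attach_to_all and the toy attach function are invented for illustration; real attachment would be the unification defined in (17)):

```python
def attach_to_all(buffer_struct, stack_candidates, attach):
    """Attempt one attachment per candidate structure on the stack.
    Since the candidates are independent structures, the resulting
    attachments could be carried out in parallel."""
    results = []
    for candidate in stack_candidates:
        attached = attach(candidate, buffer_struct)
        if attached is not None:
            results.append(attached)
    return results

# Toy attach: unite bracketed strings, refusing a category mismatch.
attach = lambda cand, buf: cand + " " + buf if cand.startswith("[NP") else None
out = attach_to_all("[NP book]", ["[NP the big red", "[VP runs"], attach)
```

The incompatible candidate simply yields no result, mirroring the way non-matching stack structures contribute nothing to (25).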
Crucially, no more attachments per structure take place when parsing the head of the noun phrase than when parsing the pre-head constituents in the noun phrase.11 Thus, in contrast with the situation when nodes are only projected when their heads are encountered, the node projection and attachment algorithms described here predict that there should not be any slowdown when parsing the head of a head-final construction. The Dutch data described in Section 2.1 are handled in a similar manner.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "This paper has described a) a principle-based algorithm for the projection of phrasal nodes before their heads are parsed, and b) an algorithm for attaching the predicted nodes. It is worthwhile to compare the new projection algorithm with algorithms that do not project H-nodes. The projection algorithm provided here involves more work and hence, on the surface, may seem somewhat stipulative compared to one that does not project H-nodes. However, it turns out that although projecting to H-nodes is more complicated than not doing so, attachment when H-nodes are not present is more complicated than attachment when they are present. That is, if a projection algorithm does not cause the projection of H-nodes, it will have a more complicated attachment algorithm. For example, if H-nodes are projected when parsing the noun phrase the woman, the determiner the is immediately projected to an H-node noun phrase, which leads to a simple attachment. If H-nodes are not projected, then projection is easier, but attachment is that much more complicated. When attaching, it will be necessary to check if a determiner is an allowable specifier of a noun phrase: the same operation that is performed when projecting to H-nodes. Thus although the complexity of particular components changes, the complexity of the entire parsing algorithm does not change, whether or not H-nodes are projected. 
Since the proposed projection and attachment algorithms make better empirical predictions than ones that do not predict structure, the new algorithms are preferred. Note that it is the number of attachments per structure that is crucial here, and not the number of total attachments, since attachments made upon two independent structures may be performed in parallel, whereas attachments made on the same structure must be performed serially. For example, since structures (24a) and (24b) are independent, attachments may be made to each of these in parallel. But if an attachment B relies on the result of another attachment A, then attachment A must be performed first.
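The cost metric implicit in this footnote, serial cost as the maximum number of attachments made on any single structure rather than the total attachment count, might be expressed as follows (a hypothetical sketch; serial_cost is an invented name):

```python
from collections import Counter

def serial_cost(attachments):
    """Attachments targeting distinct structures can run in parallel, so
    the serial cost of a parse step is the largest number of attachments
    made on any one structure, not the total number of attachments.
    Each attachment is a (target_structure, attached_material) pair."""
    per_structure = Counter(target for target, _ in attachments)
    return max(per_structure.values(), default=0)
```

On this measure, the two attachments to the independent structures (24a) and (24b) cost one step, while two attachments to the same structure would cost two.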
(1981), Lectures on Government and Binding, Foris, Dordrecht, The Netherlands.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Knowledge of Language: Its Nature, Origin and Use", "authors": [ { "first": "N", "middle": [], "last": "Chomsky", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chomsky, N. (1986a), Knowledge of Language: Its Nature, Origin and Use, Praeger Publishers, New York, NY.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Barriers, Linguistic Inquiry Monograph 13", "authors": [ { "first": "N", "middle": [], "last": "Chomsky", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chomsky, N. (1986b), Barriers, Linguistic Inquiry Monograph 13, MIT Press, Cambridge, MA.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A Parallel Model for Adult Sentence Processing", "authors": [ { "first": "R", "middle": [], "last": "Clark", "suffix": "" }, { "first": "E", "middle": [], "last": "Gibson", "suffix": "" } ], "year": 1988, "venue": "Proceedings of the Tenth Cognitive Science Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clark, R. & Gibson, E. (1988), \"A Parallel Model for Adult Sentence Processing\" , Proceedings of the Tenth Cognitive Science Conference, McGill University, Montreal, Quebec.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Syntactic Processing Evidence from Dutch", "authors": [ { "first": "L", "middle": [], "last": "Frazier", "suffix": "" } ], "year": 1987, "venue": "Natural Language and Linguistic Theory", "volume": "5", "issue": "", "pages": "519--559", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frazier, L. (1987) \"Syntactic Processing Evidence from Dutch\" , Natural Language and Linguistic Theory 5, pp. 
519-559.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Garden-Path Effects in a Parser with Parallel Architecture, Eastern States Conference on Linguistics", "authors": [ { "first": "E", "middle": [], "last": "Gibson", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gibson, E. (1987), Garden-Path Effects in a Parser with Parallel Architecture, Eastern States Conference on Linguistics, Columbus, OH.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Parsing with Principles: A Computational Theory of Human Sentence Processing", "authors": [ { "first": "E", "middle": [], "last": "Gibson", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gibson, E. (forthcoming), Parsing with Principles: A Computational Theory of Human Sentence Processing, Ms., Carnegie Mellon University, Pittsburgh, PA.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Positing Gaps in a Parallel Parser", "authors": [ { "first": "E", "middle": [], "last": "Gibson", "suffix": "" }, { "first": "R", "middle": [], "last": "Clark", "suffix": "" } ], "year": 1987, "venue": "Proceedings of the Eighteenth North East Linguistic Society Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gibson, E. & Clark, R. (1987), \"Positing Gaps in a Parallel Parser\", Proceedings of the Eighteenth North East Linguistic Society Conference, University of Toronto, Toronto, Ontario.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Government-Binding Parser for Warlpiri, a Free Word Order Language, MIT Master's Thesis", "authors": [ { "first": "M", "middle": [], "last": "Kashket", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kashket, M. 
(1987), Government-Binding Parser for Warlpiri, a Free Word Order Language, MIT Master's Thesis, Cambridge, MA.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Connectedness and Binary Branching", "authors": [ { "first": "R", "middle": [], "last": "Kayne", "suffix": "" } ], "year": 1983, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kayne, R. (1983) Connectedness and Binary Branching, Foris, Dordrecht, The Netherlands.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A Theory of Syntactic Recognition for Natural Language", "authors": [ { "first": "M", "middle": [], "last": "Marcus", "suffix": "" } ], "year": 1980, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcus, M. (1980), A Theory of Syntactic Recognition for Natural Language, MIT Press, Cambridge, MA.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Parsing and the Acquisition of Word Order", "authors": [ { "first": "E", "middle": [], "last": "Nyberg", "suffix": "" } ], "year": 1987, "venue": "Proceedings of the Fourth Eastern States Conference on Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nyberg, E. (1987), \"Parsing and the Acquisition of Word Order\", Proceedings of the Fourth Eastern States Conference on Linguistics, The Ohio State University, Columbus, OH.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "An Information-based Syntax and Semantics", "authors": [ { "first": "C", "middle": [], "last": "Pollard", "suffix": "" }, { "first": "I", "middle": [], "last": "Sag", "suffix": "" } ], "year": 1987, "venue": "CSLI Lecture Notes Number", "volume": "13", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pollard, C. & Sag, I. 
(1987) An Information-based Syntax and Semantics, CSLI Lecture Notes Number 13, Menlo Park, CA.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Garden Path Phenomena and the Grammatical Basis of Language Processing", "authors": [ { "first": "B", "middle": [], "last": "Pritchett", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pritchett, B. (1987), Garden Path Phenomena and the Grammatical Basis of Language Processing, Harvard University Ph.D. dissertation, Cambridge, MA.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "An Introduction to Unification-based Approaches to Grammar", "authors": [ { "first": "S", "middle": [], "last": "Shieber", "suffix": "" } ], "year": 1986, "venue": "CSLI Lecture Notes Number", "volume": "4", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shieber, S. (1986) An Introduction to Unification-based Approaches to Grammar, CSLI Lecture Notes Number 4, Menlo Park, CA.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Origins of Phrase Structure", "authors": [ { "first": "T", "middle": [], "last": "Stowell", "suffix": "" } ], "year": 1981, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stowell, T. (1981), Origins of Phrase Structure, MIT Ph.D. dissertation.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Parsing with a GB Grammar", "authors": [ { "first": "E", "middle": [], "last": "Wehrli", "suffix": "" } ], "year": 1988, "venue": "Natural Language Parsing and Linguistic Theories", "volume": "", "issue": "", "pages": "177--201", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wehrli, E. (1988), \"Parsing with a GB Grammar\", in U. Reyle and C. 
Rohrer (eds.), Natural Language Parsing and Linguistic Theories, 177-201, Reidel, Dordrecht, The Netherlands.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "a. ... dat [S [NP het meisje [PP van Holland]] [VP glimlachte]] ... \"that the girl from Holland smiled\" ... b. ... dat [S [NP het meisje] [VP [V' [PP van Holland] [V houdt]]]]", "type_str": "figure" }, "FIGREF1": { "num": null, "uris": null, "text": ", Gibson & Clark (1987) and Clark & Gibson (1988)). The CPP model is based on the principles of Government-Binding Theory (Chomsky (1981, 1986a)); crucially CPP has no separate module containing language-particular rules. Following Marcus (1980), structures parsed under the CPP model are placed on a stack and the most recently built structures are placed in a data structure called the buffer.", "type_str": "figure" }, "FIGREF2": { "num": null, "uris": null, "text": "The parser builds structure by attaching nodes in the buffer to nodes on top of the stack. Unlike Marcus's model, the CPP model allows multiple representations for the same input string to exist in a buffer or stack cell. Although multiple representations for the same input string are permitted, constraints on parallelism frequently cause one representation to be preferred over the others. Motivation for the parallel hypothesis comes from garden path effects and perception of ambiguity in addition to relative processing load effects. For information on the particular constraints and their motivations, see Gibson & Clark (1987), Clark & Gibson (1988) and the references cited in these papers.", "type_str": "figure" }, "FIGREF3": { "num": null, "uris": null, "text": "a. The man shouts [PP to the woman] [CP that Ernie sees the rock] b. The man shouts [CP that Ernie sees the rock] [PP to the woman]", "type_str": "figure" }, "FIGREF4": { "num": null, "uris": null, "text": "The CPP algorithm is essentially very simple. 
A word is projected via node projection (see Section 3.2) into the buffer. If attachments are possible between the buffer and the top of the stack, then the results of these attachments are placed into the buffer and the stack is popped. Attachments are attempted again until no longer possible. This entire procedure is repeated for each word in the input string. The formal CPP algorithm is given below: 1. (Initializations) Set the stack to nil. Set the buffer to nil. 4 In a more complete theory, a syntactic category would be determined from the thematic role (Chomsky (1986a)). 2. (Ending Condition) If the end of the input string has been reached and the buffer is empty then return the contents of the stack and stop. 3. If the buffer is empty then project nodes for each lexical entry corresponding to the next word in the input string, and put this list of maximal projections into the buffer. 4. Make all possible attachments between the stack and the buffer, subject to the attachment constraints (see Clark & Gibson (1988)). Put the attached structures in the buffer. If no attachments are possible, then put the contents of the buffer on top of the stack. 5. Go to 2.", "type_str": "figure" }, "FIGREF5": { "num": null, "uris": null, "text": "a. [NP [N' [N rock]]] b. [VP [V' [V rock]]] Next, the parser hypothesizes nodes whose heads may appear immediately to the right of the given C-node. These predicted structures are called hypothesized nodes or H-nodes. An H-node is defined to be any node whose head is to the right of all lexical input. In order to determine which H-node structures to hypothesize from a given C-node, it is necessary to consult the argument properties associated with the C-node together with the specifier and modifier properties of the nodal category and the word order properties of the language in question. 
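The shift/attach loop of the CPP algorithm can be sketched as follows. This is a simplified, hypothetical rendering (cpp_parse is an invented name; the project and attach_all callbacks stand in for node projection and constrained attachment, and parallel alternative structures are omitted):

```python
def cpp_parse(words, project, attach_all):
    """Simplified CPP loop: project a word into the buffer, attach the
    buffer against the top of the stack as long as attachments succeed,
    and otherwise shift the buffer onto the stack."""
    stack, buffer = [], []
    words = list(words)
    while words or buffer:
        if not buffer:
            buffer = project(words.pop(0))       # node projection
        attached = attach_all(stack[-1], buffer) if stack else []
        if attached:                             # attach: pop stack,
            stack.pop()                          # results back to buffer
            buffer = attached
        else:                                    # no attachment: shift
            stack.append(buffer)
            buffer = []
    return stack                                 # ending condition
```

With a trivial projection and an always-failing attacher the parser just shifts; with an attacher that unites adjacent structures, the whole input ends up in one stack cell.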
It is assumed that the ability of one category to act as specifier, modifier or argument of another category is part of unparameterized Universal Grammar. On the other hand, the relative order of two categories is assumed to be parameterized across different languages. For example, a determiner phrase, if it exists in a given language, is universally allowable as a specifier of a noun phrase. Whether the determiner appears before or after its head noun depends on the language-particular values associated with the parameters that determine word order. Three parameters are proposed to account for variation in word order, one for each of argument, specifier and modifier projections.5 For each language, each of these parameters is associated with at least one value, where the parameter values come from the following set: {*head*, *satellite*}.6 The value *head* indicates that a category C causes the projection to the right of those categories for which C may be head. Thus this value indicates head-initial word order. The value *satellite* indicates that a category C causes the projection to the right of those categories for which C may be a satellite category. Hence this value indicates head-final word order. H-node projection from a category C is defined in (6). (6) (Argument, Specifier, Modifier) H-Node Projection from category C: If the value associated with the (argument, specifier, modifier) projection parameter is *head*, then cause the projection of (argument, specifier, modifier) satellites, and attach them to the right below the appropriate projection of C. 
If the value associated with the (argument, specifier, modifier) projection parameter is *satellite*, then cause the projection of (argument, specifier, modifier) heads, and attach them to the right above the appropriate projection of C.", "type_str": "figure" }, "FIGREF6": { "num": null, "uris": null, "text": "5 Furthermore, it is assumed that the value of the modifier projection parameter defaults to the value of the argument projection parameter. 6 I will use the term satellite to indicate non-head constituents: arguments, specifiers and modifiers. and attached. For example, the verb see subcategorizes for a noun phrase, so an empty noun phrase node is hypothesized and attached as argument of the verb: The specifier projection parameter, on the other hand, is set to the value *satellite* in English so that specifiers appear before their heads. If the category associated with a C-node is an allowable specifier for other categories, then an H-node projection of each of these categories is built and the C-node specifier is attached to each. For example, since a determiner may specify a noun phrase, an H-node noun phrase is hypothesized when parsing a determiner in English: The node projection algorithm provides a new derivation of language-particular word order. In previous principle-based systems, word order is derived from parameterized direction of attachment (see Gibson & Clark (1987), Nyberg (1987), Wehrli (1988)). An attachment takes place from buffer to stack in head-initial constructions and from stack to buffer in head-final constructions. Since attachment is now a uniform operation as defined in (17), this parameterization is no longer necessary. Instead, in head-initial constructions, nodes now project to the nodes that they may immediately dominate. 
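The parameterization in (6) might be encoded as a per-language table of default values, as in this hypothetical sketch (DEFAULTS, OVERRIDES and projection_value are invented names; the override entry reflects the text's "most specific value wins" refinement for English adjectives):

```python
# English defaults per the text: arguments project head-initially,
# specifiers head-finally; the modifier parameter, when absent, falls
# back on the argument parameter's value.
DEFAULTS = {"english": {"argument": "*head*", "specifier": "*satellite*"}}

# A category-specific value shadows the language-wide default.
OVERRIDES = {("english", "adjective", "modifier"): "*satellite*"}

def projection_value(language, category, projection_type):
    """Return the word-order parameter value used for H-node projection."""
    if (language, category, projection_type) in OVERRIDES:
        return OVERRIDES[(language, category, projection_type)]
    params = DEFAULTS[language]
    if projection_type == "modifier":
        # modifier projection defaults to the argument projection value
        return params.get("modifier", params["argument"])
    return params[projection_type]
```

A *head* value then triggers projection of satellites to the right below the projection of C, and a *satellite* value triggers projection of heads to the right above it.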
In head-final constructions, nodes now project to those nodes that they may be immediately dominated by. The projection parameters as defined in (6) account for many facts about word order across languages. However, most, if not all, languages have cases that do not fit this clean picture. For example, while modifiers in English are predominantly post-head, adjectives appear before the head. A single global value for modifier projection predicts that this situation is impossible. Hence we must assume that the values given for the projection parameters are only default values. In order to formalize this idea, I assume the existence of a hierarchy of categories and words as shown below: It is assumed that the value for each of the projection parameters is the default value for that projection type with respect to a particular language. However, a particular category or word may have a value associated with it for a projection parameter in addition to the default one. If this is the case, then only the most specific value is used. For example, in English, the category adjective is associated with the value *satellite* with respect to modifier projection. Thus English adjectives appear before the head. The adjective tall will therefore cause the projection of both a C-node adjective phrase and an H-node noun phrase. If recursive application of projection to H-nodes were allowed, then it would be possible, in principle, to project an infinite number of nodes from a single lexical entry. In English, for example, a genitive noun phrase can specify another noun phrase. This noun phrase may also be a genitive noun phrase, and so on. If H-nodes could project to further H-nodes, then it would be necessary to hypothesize an infinite number of genitive NP H-nodes for every genitive NP that is read. 
As a result of this difficulty, the H-node Projection Constraint is proposed: The H-node Projection Constraint: Only a C-node may cause the projection of an H-node.", "type_str": "figure" }, "FIGREF7": { "num": null, "uris": null, "text": "Since structures are predicted by the node projection algorithm, the attachment algorithm must dictate how subsequent structures match these predictions. Consider the following two examples from English: the first is an example of specifier attachment; the second is an example of argument attachment. In English, specifiers precede the head and arguments follow the head. It is desirable for the attachment algorithm to handle both kinds of attachments without word order particular stipulations. First, suppose that the word the is on the stack as both a determiner phrase and an H-node noun phrase. Furthermore, suppose that the word woman is projected into the buffer as both a noun phrase and an H-node clausal phrase. The attachment algorithm should allow two attachments at this point: the H-node NP on the stack uniting with each NP C-node in the buffer. It might also seem reasonable to allow the bare determiner phrase to attach directly as specifier of each noun phrase. However, this kind of attachment is undesirable for two reasons. First of all, it makes the attachment operation a disjunctive operation: an attachment would involve either matching an H-node or meeting the satellite requirements of a category. Second of all, it makes H-node projection unnecessary in most situations and therefore somewhat stipulative. That is, allowing a disjunctive attachment operation would permit many derivations that never use an H-node, so that the need for H-nodes would be restricted to head-final constructions with pre-head satellites (see Section 2). It is therefore desirable for all attachments to involve matching an H-node. 
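H-node matching of the kind described for the/woman can be sketched as a unification in which the C-node supplies the head and the H-node contributes the material it has already built. A hypothetical sketch, with flat feature dictionaries standing in for tree structures (unify_attach is an invented name):

```python
def unify_attach(h_node, c_node):
    """Unify a predicted H-node (head unseen) with a C-node from the
    buffer. Returns the united structure, or None (nil) when the
    category or bar-level features clash."""
    if h_node["category"] != c_node["category"]:
        return None
    if h_node["bar"] != c_node["bar"]:
        return None
    merged = dict(c_node)                 # C-node supplies the head word
    if "specifier" in h_node:
        merged["specifier"] = h_node["specifier"]
    return merged

h_np = {"category": "N", "bar": 2, "specifier": "the"}   # H-node NP from "the"
c_np = {"category": "N", "bar": 2, "head": "woman"}      # C-node NP for "woman"
```

Because every attachment is a unification against an H-node, the same operation covers pre-head specifier attachment and post-head argument attachment, with no word-order-particular clauses.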
Now suppose that the preposition beside is on the stack and the noun Frank is represented in the buffer as a noun phrase and a clausal phrase:", "type_str": "figure" }, "FIGREF8": { "num": null, "uris": null, "text": "(e.g. the head of a predicate), but is not specified for a particular category, subsumes a structure having the category verb, since verbs are predicative and thus clausal categories. Hence structure (15a) subsumes structure (15b): The unification operation is the least upper bound operator in the subsumption ordering on information in a structure. Since structure (15a) subsumes structure (15b), the result of unifying structure (15a) with structure (15b) is structure (15b). Two structures are compatible if the unification of the two structures is non-nil. The information on a structure that is relevant to attachment consists of the node's bar level (e.g., zero level, intermediate or maximal), and the node's lexical features (e.g. category, case, etc.).", "type_str": "figure" }, "FIGREF9": { "num": null, "uris": null, "text": "[NP [DetP the] [N' [N e]]]", "type_str": "figure" }, "FIGREF10": { "num": null, "uris": null, "text": "[NP [N' [AP big] [N' [N e]]]]", "type_str": "figure" }, "FIGREF11": { "num": null, "uris": null, "text": "(22) a. [AP red] b. [NP [N' [AP red] [N' [N e]]]]", "type_str": "figure" }, "TABREF1": { "html": null, "text": "c. The tree structure above Y is compatible with the tree structure above Z, subject to the attachment constraint in (16); d. The lexical features of structure Y are compatible with the lexical features of structure Z; e. Structure Y is bar-level compatible with structure Z. A structure U in the buffer is bar-level compatible with a structure V on the stack iff all of (a), (b) and (c) are true: a. Structure U contains a node, S, such that S dominates all lexical material in U; b. Structure V contains an H-node structure, T, that dominates no lexical material; c. 
The bar level of S is compatible with the bar level of T.", "num": null, "content": "
(17) A structure W in the buffer may attach to a structure X on the stack iff all of (a)-(e) are true:
a. Structure W contains a maximal projection node, Y, such that Y dominates all lexical material in W;
b. Structure X contains a maximal projection H-node structure, Z;
c. The tree structure above Y is compatible with the tree structure above Z, subject to the attachment constraint in (16);
d. The lexical features of structure Y are compatible with the lexical features of structure Z;
e. Structure Y is bar-level compatible with structure Z.
Bar-level compatibility is defined in (18):
(18) A structure U in the buffer is bar-level compatible with a structure V on the stack iff all of (a), (b) and (c) are true:
a. Structure U contains a node, S, such that S dominates all lexical material in U;
b. Structure V contains an H-node structure, T, that dominates no lexical material;
c. The bar level of S is compatible with the bar level of T.
", "type_str": "table" } } } }