{ "paper_id": "1991", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:36:59.189882Z" }, "title": "A HYBRID MODEL OF HUMAN SENTENCE PROCESSING: PARSING RIGHT-BRANCHING, CENTER EMBEDDED AND CROSS-SERIAL DEPENDENCIES", "authors": [ { "first": "Theo", "middle": [], "last": "Vosse", "suffix": "", "affiliation": {}, "email": "vosse@psych.kun.nl" }, { "first": "Gerard", "middle": [], "last": "Kempen", "suffix": "", "affiliation": {}, "email": "kempen@psych.kun.nl" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "A new cognitive architecture for the syntactic aspects of human sentence processing (called Unification Space) is tested against experimental data from human subjects. The data, originally collected by Bach, Brown and Marslen-Wilson (1986), concern the comprehensibility of verb dependency construc tions in Dutch and German: right-branching, center embedded, and cross-serial dependencies of one to four levels deep. A satisfactory fit is obtained be tween comprehensibility data and parsability scores in the model.", "pdf_parse": { "paper_id": "1991", "_pdf_hash": "", "abstract": [ { "text": "A new cognitive architecture for the syntactic aspects of human sentence processing (called Unification Space) is tested against experimental data from human subjects. The data, originally collected by Bach, Brown and Marslen-Wilson (1986), concern the comprehensibility of verb dependency construc tions in Dutch and German: right-branching, center embedded, and cross-serial dependencies of one to four levels deep. A satisfactory fit is obtained be tween comprehensibility data and parsability scores in the model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "construction typ es and depths ( 1 = very easy, 9 = very hard).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1. 
Comprehensibility ratings for various", "sec_num": null }, { "text": "In a recent paper (Kempen & Vosse, 1990), we have proposed a new cognitive architecture for the syntactic aspects of human sentence processing. The model is 'hybrid' in the sense that it combines symbolic structures (parse trees) with non-symbolic processing (simulated annealing). The computer model of this architecture, called Unification Space, is capable of simulating well-known psycholinguistic sentence understanding phenomena such as the effects of Minimal Attachment, Right Association and Lexical Ambiguity (cf. Frazier, 1987).", "cite_spans": [ { "start": 18, "end": 40, "text": "(Kempen & Vosse, 1990)", "ref_id": "BIBREF8" }, { "start": 525, "end": 539, "text": "Frazier, 1987)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "In this paper we test the Unification Space architecture against a set of psycholinguistic data on the difficulty of understanding three types of verb dependency constructions of various levels of embedding. (A recent paper by Joshi (1990) treats these crossed and nested dependencies from an automaton perspective.) The data were collected by Bach, Brown and Marslen-Wilson (1986) and concern comprehensibility ratings of cross-serial, center-embedded and right-branching constructions as illustrated by (1). Subjects rated two types of verb dependencies: right-branching and either center-embedded (German) or cross-serial (Dutch) dependencies. Bach et al. obtained comprehensibility (or rather, incomprehensibility) ratings for four 'levels' (the term level refers to the depth of embedding; level 1: one clause, without embeddings; level 2: two clauses, one embedded in the other as in (1), etc.). Notice that the (Dutch) crossed dependencies were consistently rated easier to understand than the (German) nested dependencies. From level 3 onward, the right-branching structures were judged easier than their crossed or nested counterparts. Via a question-answering task Bach et al. 
verified that the comprehensibility ratings indeed reflect processing loads (real difficulties in comprehension).", "cite_spans": [ { "start": 231, "end": 243, "text": "Joshi (1990)", "ref_id": "BIBREF6" }, { "start": 271, "end": 308, "text": "Bach, Brown and Marslen-Wilson (1986)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "In Section 2 we outline briefly the type of grammar we use to represent syntactic structures. The parsing mechanism capable of building such structures is described in Section 3. Section 4 is devoted to design and results of the computer simulation. In Section 5, finally, we evaluate our results and draw some comparisons with alternative computational models proposed in the psycholinguistic literature. Kempen (1987) introduced Segment Grammar as a formalism for generating syntactic trees out of so-called segments. The basic tree formation operation is unification of the feature matrices of nodes which carry the same category label. In Figure 3 successful unification has been visualized as the merger of the corresponding nodes.", "cite_spans": [ { "start": 408, "end": 414, "text": "(1987)", "ref_id": null } ], "ref_spans": [ { "start": 622, "end": 630, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Dependency type", "sec_num": null }, { "text": "s ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Henceforth, we denote the test-tube by the term Unification Space. Words recognized in the input string are immediately looked up in the mental lexicon and the lexical entry listed there is immediately entered into the Unification Space. In case of an ambiguous input word, all entries are fed into the system simultaneously.", "sec_num": null }, { "text": "'temperature' variable T which decreases gradually according to the 'annealing schedule' which has been determined beforehand. 
We define a parameter E (for global Excitation) whose function is similar to that of temperature. However, E's value does not decrease monotonically according to some predetermined annealing schedule; instead it is proportional to the summed activations of all nodes that currently populate the Unification Space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "On the other hand, unified nodes may break up, with probability pB. This probability increases as the activation of the nodes and/or their grammatical goodness of fit decreases. One consequence of this scheme is a bias in favor of semantically and syntactically well-formed syntactic trees encompassing recent nodes. • Global excitation. Due to the spontaneous decay of node activation and the concomitant rising pB, all unifications would ultimately be annulled in the absence of a mechanism for intercepting and 'freezing' high-quality parse trees. In standard versions of simulated annealing one obtains this effect by making both pu and pB dependent on a global", "sec_num": null }, { "text": "The relation between E on one hand and pu and pB on the other is such that, after E has fallen below a threshold value ('freezing'), no unifications are attempted anymore nor can unified nodes become dissociated. If the resulting conformation consists of exactly one tree, the parsing process is said to have succeeded. If several disconnected, partial trees result, the parsing has failed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "On the other hand, unified nodes may break up, with probability pB. This probability increases as the activation of the nodes and/or their grammatical goodness of fit decreases. One consequence of this scheme is a bias in favor of semantically and syntactically well-formed syntactic trees encompassing recent nodes. • Global excitation. 
Due to the spontaneous decay of node activation and the concomitant rising pB, all unifications would ultimately be annulled in the absence of a mechanism for intercepting and 'freezing' high-quality parse trees. In standard versions of simulated annealing one obtains this effect by making both pu and pB dependent on a global", "sec_num": null }, { "text": "It is important to note that the workings of the Unification Space prevent the parallel growth of multiple parse trees spanning the same input string. In other words, structural (syntactic) ambiguity is not reflected by multiple parse trees. Only in case of lexical ambiguity can there be parallel activation of several segments or subtrees. This agrees with the picture emerging from the psycholinguistic literature (cf. the survey by Rayner & Pollatsek, 1989).", "cite_spans": [ { "start": 436, "end": 461, "text": "Rayner & Pollatsek, 1989)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "On the other hand, unified nodes may break up, with probability pB. This probability increases as the activation of the nodes and/or their grammatical goodness of fit decreases. One consequence of this scheme is a bias in favor of semantically and syntactically well-formed syntactic trees encompassing recent nodes. • Global excitation. Due to the spontaneous decay of node activation and the concomitant rising pB, all unifications would ultimately be annulled in the absence of a mechanism for intercepting and 'freezing' high-quality parse trees. In standard versions of simulated annealing one obtains this effect by making both pu and pB dependent on a global", "sec_num": null }, { "text": "We now describe the essence of the computer implementation of the Unification Space model. 
Mathematical details can be found in Kempen & Vosse (1989).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "On the other hand, unified nodes may break up, with probability pB. This probability increases as the activation of the nodes and/or their grammatical goodness of fit decreases. One consequence of this scheme is a bias in favor of semantically and syntactically well-formed syntactic trees encompassing recent nodes. • Global excitation. Due to the spontaneous decay of node activation and the concomitant rising pB, all unifications would ultimately be annulled in the absence of a mechanism for intercepting and 'freezing' high-quality parse trees. In standard versions of simulated annealing one obtains this effect by making both pu and pB dependent on a global", "sec_num": null }, { "text": "During each cycle, one iteration of the basic algorithm is carried out. This process stops when E has fallen below the threshold value. However, if a word has already been dropped from the input buffer, its lexical entry is not reentered.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time is sliced up into intervals of equal duration.", "sec_num": "1." }, { "text": "In our earlier study we obtained satisfactory simulation results for the sentences in (2). The sole difference between the three grammars rests in their word order constraints. Center-embeddings require the embedded subtree to be positioned in between the branches of the embedding S. (The constraints for both other grammars are easy to devise.) However, there was no need to have the Unification Space actually check word order constraints because we never used input strings which contained more than one pair of brackets of the same type (e.g. '{} {} ') and/or more than one type of embedding (e.g. '[<>] {} '). Thus word order constraints are in effect encoded in the bracket type feature. 
The actual simulations were run with 5 (levels) times 3 (dependency types) equals 15 different input strings. Each string was fed into the Unification Space 400 times. The parameter settings were exactly equal to those used in the earlier Kempen & Vosse (1989) paper. There are also differences between the human data and computer simulation, however. First of all, the comprehension scores for the three dependency types fan out more rapidly in our simulation than in the human subjects. Second, in the human data the first signs of a differentiation between sentence types manifest themselves already at level 2, whereas in our simulation the percentages start diverging at level 3 only. From our previous study we know that the Unification Space is rather sensitive to sentence length. If this applies to human readers as well, we could argue that our level 1 and level 2 scores are too ", "cite_spans": [ { "start": 935, "end": 957, "text": "Kempen & Vosse (1989)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "THE SIMULATION STUDY", "sec_num": null }, { "text": "The simulation revealed a satisfactory fit between the empirical pattern of comprehensibility ratings observed by Bach et al. and parsability by the Unification Space. Since the model applied exactly the same grammar when processing the three types of dependencies, it follows that the empirical pattern can be explained in terms of the different spatial-temporal arrangements between the members of a dependency pair. No additional assumptions about differences between the syntactic structure underlying the three types of dependencies are needed.", "cite_spans": [ { "start": 114, "end": 127, "text": "Bach et al.", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "DISCUSSION", "sec_num": null }, { "text": "To what extent are alternative computational models of human sentence processing capable of accounting for the empirical pattern? 
So far, Joshi's (1990) proposal is the only one reported in the literature. However, it is not clear how well this model behaves with respect to other psycholinguistic sentence processing phenomena such as Right Association, Minimal Attachment, Verb Frame Preferences and the like. Two other recent models (Gibson, 1990a,b,c; McRoy & Hirst, 1990) do address the latter phenomena but they pay no attention to cross-serial dependencies. So, as far as we know, there is no competing model of comparably wide coverage.", "cite_spans": [ { "start": 139, "end": 154, "text": "Joshi's (1990)", "ref_id": null }, { "start": 438, "end": 457, "text": "(Gibson, 1990a,b,c;", "ref_id": null }, { "start": 458, "end": 477, "text": "McRoy & Hirst, 1990", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "DISCUSSION", "sec_num": null }, { "text": "These numbers have been computed as described in footnote 3 below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Crossed and nested dependencies in German and Dutch: a psycholinguistic study", "authors": [ { "first": "Emmon", "middle": [], "last": "Bach", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Brown", "suffix": "" }, { "first": "William", "middle": [], "last": "Marslen-Wilson", "suffix": "" } ], "year": 1986, "venue": "Language and Cognitive Processes", "volume": "1", "issue": "", "pages": "249--262", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bach, Emmon, Colin Brown & William Marslen-Wilson (1986). Crossed and nested dependencies in German and Dutch: a psycholinguistic study. 
Language and Cognitive Processes, 1, 249-262.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Incremental sentence generation: Representational and computational aspects", "authors": [ { "first": "Koen", "middle": [], "last": "De Smedt", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "De Smedt, Koen (1990). Incremental sentence generation: Representational and computational aspects. Ph.D. thesis, University of Nijmegen.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Theories of sentence processing", "authors": [ { "first": "Lyn", "middle": [], "last": "Frazier", "suffix": "" } ], "year": 1987, "venue": "Modularity in knowledge representation and natural language understanding", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frazier, Lyn (1987). Theories of sentence processing. In: Jay L. Garfield (Ed.), Modularity in knowledge representation and natural language understanding. Cambridge, MA: M.I.T. Press.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Memory capacity and sentence processing", "authors": [ { "first": "Edward", "middle": [], "last": "Gibson", "suffix": "" } ], "year": 1990, "venue": "Proceedings of the 28th Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gibson, Edward (1990a). Memory capacity and sentence processing. In: Proceedings of the 28th Annual Meeting of the ACL, Pittsburgh.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Recency preferences and garden-path effects", "authors": [ { "first": "Edward", "middle": [], "last": "Gibson", "suffix": "" } ], "year": 1990, "venue": "Proceedings of the 12th Annual Conference of the Cognitive Science Society", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gibson, Edward (1990b). 
Recency preferences and garden-path effects. In: Proceedings of the 12th Annual Conference of the Cognitive Science Society, Cambridge, MA.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A computational theory of processing overload and garden-path effects", "authors": [ { "first": "Edward", "middle": [], "last": "Gibson", "suffix": "" } ], "year": 1990, "venue": "Proceedings of the 13th International Conference on Computational Linguistics (COLING-90)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gibson, Edward (1990c). A computational theory of processing overload and garden-path effects. In: Proceedings of the 13th International Conference on Computational Linguistics (COLING-90), Helsinki.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Processing crossed and nested dependencies: an automaton perspective on the psycholinguistic results", "authors": [ { "first": "Aravind", "middle": [ "K" ], "last": "Joshi", "suffix": "" } ], "year": 1990, "venue": "Language and Cognitive Processes", "volume": "5", "issue": "1", "pages": "249--262", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joshi, Aravind K. (1990). Processing crossed and nested dependencies: an automaton perspective on the psycholinguistic results. Language and Cognitive Processes, 5(1), 249-262.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A framework for incremental syntactic tree formation", "authors": [ { "first": "Gerard", "middle": [], "last": "Kempen", "suffix": "" } ], "year": 1987, "venue": "Proceedings of the 10th International Joint Conference on Artificial Intelligence (IJCAI-87)", "volume": "", "issue": "", "pages": "655--660", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kempen, Gerard (1987). A framework for incremental syntactic tree formation. 
In: Proceedings of the 10th International Joint Conference on Artificial Intelligence (IJCAI-87), Milan, 655-660.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Incremental syntactic tree formation in human sentence processing: an interactive architecture based on activation decay and simulated annealing", "authors": [ { "first": "Gerard", "middle": [], "last": "Kempen", "suffix": "" }, { "first": "Theo", "middle": [], "last": "Vosse", "suffix": "" } ], "year": 1990, "venue": "Connection Science", "volume": "1", "issue": "", "pages": "273--290", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kempen, Gerard & Theo Vosse (1990). Incremental syntactic tree formation in human sentence processing: an interactive architecture based on activation decay and simulated annealing. Connection Science, 1, 273-290.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Race-based parsing and syntactic disambiguation", "authors": [ { "first": "Susan", "middle": [], "last": "McRoy", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 1990, "venue": "Cognitive Science", "volume": "14", "issue": "", "pages": "313--353", "other_ids": {}, "num": null, "urls": [], "raw_text": "McRoy, Susan & Graeme Hirst (1990). Race-based parsing and syntactic disambiguation. Cognitive Science, 14, 313-353.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The psychology of reading", "authors": [ { "first": "Keith", "middle": [], "last": "Rayner", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Pollatsek", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rayner, Keith & Alexander Pollatsek (1989). The psychology of reading. 
Englewood Cliffs, NJ: Prentice-Hall.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A stochastic approach to parsing", "authors": [ { "first": "Geoffrey", "middle": [], "last": "Sampson", "suffix": "" } ], "year": 1986, "venue": "Proceedings of the 11th International Conference on Computational Linguistics (COLING-86)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sampson, Geoffrey (1986). A stochastic approach to parsing. In: Proceedings of the 11th International Conference on Computational Linguistics (COLING-86), Bonn.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Various types of syntactic segments.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF2": { "text": "Building a tree through unification. A segment is a node-arc-node triple, the top node being called 'root' and the bottom node 'foot'. Both root and foot nodes are labeled by a syntactic category (e.g. S, NP) and have associated with them a matrix of features (i.e., attribute-value pairs). Arc labels represent grammatical functions. See Figure 2 for some examples. All syntactic knowledge a segment needs (including ordering rules) is represented in features.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF3": { "text": "Lexical entry for the transitive verb 'eat'. The following principles control the events in the Unification Space (see Kempen & Vosse, 1989, for details): • Activation decay. When the nodes are entered into the Unification Space they are assigned an initial activation level by their lexicon entry. This activation level decays over time. • Stochastic parse tree optimization. Generally, on the basis of its feature composition, a node could unify with several other nodes present in the Unification Space. In order to make the best possible choice, Simulated Annealing is used as a stochastic optimization technique (cf. Sampson, 1986). 
If two nodes can unify, they actually unify with probability pu. This probability depends, among other things, on the activation level of both nodes and on the grammatical 'goodness of fit'. Various syntactic and semantic factors are at stake here. Among the former are word order constraints. For instance, if during the analysis of He gave that girl a dollar the article a would attempt to unify with the noun girl, this would cause violation of a word order rule and drastically reduce the value of pu. Assigning a dollar the role of indirect object would be evaluated as less good than as direct object, both for syntactic and semantic reasons.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF4": { "text": "the input sentence are stored in an input buffer for a limited period of time, TB. Individual words are read out from left to right at fixed intervals Tw (with Tw << TB). Their corresponding lexical entries are immediately entered into the or not it will dissociate from its unification partner (if any). This event takes place with probability pB which correlates negatively with the activation level. Whenever lexical segments are involved in a break-up (lexical segments have word classes rather than phrases as their foot labels), their lexical entries are reentered into the Unification Space without delay. Thus they are given a new chance to find a suitable unification partner. The activation levels of reentering nodes are reset to the initial value stored in the lexicon.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF5": { "text": "(2a) The rat the cat chased escaped. (2b) The cat chased the rat that escaped. (2c) The rat the cat the dog bit chased escaped. (2d) The dog bit the cat that chased the rat that escaped. We have devised simple artificial grammars which generate right-branching, center-embedded and cross-serial dependencies among pairs of opening and closing brackets, e.g. 'O{)', '{{))' or '(()}'. 
The grammars contain two types of lexical segments (with arc labels Left and Right) and one optional type of non-lexical segments with arc label Mod. The number of Mod segments dominated by an S node is either zero or one. The optional Mod segment is attached to the lexical entries of opening brackets as depicted in Figure 5. It is the Mod segments that give the", "uris": null, "num": null, "type_str": "figure" }, "FIGREF6": { "text": "Segments of the grammar, and the lexical entries for '(' and ')'.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF7": { "text": "3 No attempts have been made to find a set of parameter values yielding a better fit with Bach et al.'s empirical data. For parameter C (not discussed in the present paper) we had four different values: .1, .2, .3 and .4. There were 100 runs for each value of C. In Figure 7 we show percentages averaged over C values.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF8": { "text": "Percentages of correctly parsed strings for three types of dependency and five levels of depth. good (in Bach et al.'s study, these levels were tested through sentences of 6 to 8 words).", "uris": null, "num": null, "type_str": "figure" } } } }