{ "paper_id": "D09-1034", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:38:18.770416Z" }, "title": "Deriving lexical and syntactic expectation-based measures for psycholinguistic modeling via incremental top-down parsing", "authors": [ { "first": "Brian", "middle": [], "last": "Roark", "suffix": "", "affiliation": { "laboratory": "", "institution": "Oregon Health & Science University", "location": {} }, "email": "roark@cslu.ogi.edu" }, { "first": "Asaf", "middle": [], "last": "Bachrach", "suffix": "", "affiliation": { "laboratory": "INSERM-CEA Cognitive Neuroimaging Unit", "institution": "", "location": { "settlement": "Gif sur Yvette", "country": "France" } }, "email": "" }, { "first": "Carlos", "middle": [], "last": "Cardenas", "suffix": "", "affiliation": {}, "email": "cardenas@mit.edu" }, { "first": "Christophe", "middle": [], "last": "Pallier", "suffix": "", "affiliation": { "laboratory": "INSERM-CEA Cognitive Neuroimaging Unit", "institution": "", "location": { "settlement": "Gif sur Yvette", "country": "France" } }, "email": "christophe@pallier.org" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "A number of recent publications have made use of the incremental output of stochastic parsers to derive measures of high utility for psycholinguistic modeling, following the work of Hale (2001; 2003; 2006). In this paper, we present novel methods for calculating separate lexical and syntactic surprisal measures from a single incremental parser using a lexicalized PCFG. We also present an approximation to entropy measures that would otherwise be intractable to calculate for a grammar of that size. Empirical results demonstrate the utility of our methods in predicting human reading times.", "pdf_parse": { "paper_id": "D09-1034", "_pdf_hash": "", "abstract": [ { "text": "A number of recent publications have made use of the incremental output of stochastic parsers to derive measures of high utility for psycholinguistic modeling, following the work of Hale (2001; 2003; 2006). In this paper, we present novel methods for calculating separate lexical and syntactic surprisal measures from a single incremental parser using a lexicalized PCFG. We also present an approximation to entropy measures that would otherwise be intractable to calculate for a grammar of that size. Empirical results demonstrate the utility of our methods in predicting human reading times.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Assessment of linguistic complexity has played an important role in psycholinguistics and neurolinguistics for a long time, from the use of mean length of utterance and related scores in child language development (Klee and Fitzgerald, 1985) , to complexity scores related to reading difficulty in human sentence processing studies (Yngve, 1960; Frazier, 1985; Gibson, 1998) . Operationally, such linguistic complexity scores are derived via deterministic manual (human) annotation and scoring algorithms of language samples. Natural language processing has been employed to automate the extraction of such measures (Sagae et al., 2005; Roark et al., 2007) , which can have high utility in terms of reduction of time required to annotate and score samples. More interestingly, however, novel data driven methods are being increasingly employed in this sphere, yielding language sample characterizations that require NLP in their derivation. 
For example, scores derived from variously estimated language models have been used to evaluate and classify language samples associated with neurodevelopmen-tal or neurodegenerative disorders (Roark et al., 2007; Solorio and Liu, 2008; Gabani et al., 2009) , as well as within general studies of human sentence processing (Hale, 2001; . These scores cannot feasibly be derived by hand, but rather rely on large-scale statistical models and structured inference algorithms to be derived. This is quickly becoming an important application of NLP, making possible new methods in the study of human language processing in both typical and impaired populations.", "cite_spans": [ { "start": 214, "end": 241, "text": "(Klee and Fitzgerald, 1985)", "ref_id": "BIBREF24" }, { "start": 332, "end": 345, "text": "(Yngve, 1960;", "ref_id": "BIBREF37" }, { "start": 346, "end": 360, "text": "Frazier, 1985;", "ref_id": "BIBREF12" }, { "start": 361, "end": 374, "text": "Gibson, 1998)", "ref_id": "BIBREF15" }, { "start": 616, "end": 636, "text": "(Sagae et al., 2005;", "ref_id": "BIBREF32" }, { "start": 637, "end": 656, "text": "Roark et al., 2007)", "ref_id": "BIBREF29" }, { "start": 1134, "end": 1154, "text": "(Roark et al., 2007;", "ref_id": "BIBREF29" }, { "start": 1155, "end": 1177, "text": "Solorio and Liu, 2008;", "ref_id": "BIBREF33" }, { "start": 1178, "end": 1198, "text": "Gabani et al., 2009)", "ref_id": "BIBREF13" }, { "start": 1264, "end": 1276, "text": "(Hale, 2001;", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The use of broad-coverage parsing for psycholinguistic modeling has become very popular recently. Hale (2001) suggested a measure (surprisal) derived from an Earley (1970) parser using a probabilistic context-free grammar (PCFG) for psycholinguistic modeling; and in later work (Hale, 2003; he suggested an alternate parser-derived measure (entropy reduction) that may also account for some human sentence processing performance. Recent work continues to advocate surprisal in particular as a very useful measure for predicting processing difficulty (Boston et al., 2008a; Boston et al., 2008b; Demberg and Keller, 2008; Levy, 2008) , and the measure has been derived using a variety of incremental (left-to-right) parsing strategies, including an Earley parser (Boston et al., 2008a) , the Roark (2001) incremental top-down parser (Demberg and Keller, 2008) , and an n-best version of the Nivre et al. (2007) incremental dependency parser (Boston et al., 2008a; 2008b) . 
Deriving such measures by hand, even for a relatively limited set of stimuli, is not feasible, hence parsing plays a critical role in this developing psycholinguistic enterprise.", "cite_spans": [ { "start": 98, "end": 109, "text": "Hale (2001)", "ref_id": "BIBREF17" }, { "start": 158, "end": 171, "text": "Earley (1970)", "ref_id": "BIBREF11" }, { "start": 278, "end": 290, "text": "(Hale, 2003;", "ref_id": "BIBREF18" }, { "start": 550, "end": 572, "text": "(Boston et al., 2008a;", "ref_id": "BIBREF4" }, { "start": 573, "end": 594, "text": "Boston et al., 2008b;", "ref_id": "BIBREF5" }, { "start": 595, "end": 620, "text": "Demberg and Keller, 2008;", "ref_id": "BIBREF10" }, { "start": 621, "end": 632, "text": "Levy, 2008)", "ref_id": "BIBREF25" }, { "start": 748, "end": 784, "text": "Earley parser (Boston et al., 2008a)", "ref_id": null }, { "start": 791, "end": 803, "text": "Roark (2001)", "ref_id": "BIBREF30" }, { "start": 832, "end": 858, "text": "(Demberg and Keller, 2008)", "ref_id": "BIBREF10" }, { "start": 890, "end": 909, "text": "Nivre et al. (2007)", "ref_id": "BIBREF28" }, { "start": 940, "end": 962, "text": "(Boston et al., 2008a;", "ref_id": "BIBREF4" }, { "start": 963, "end": 969, "text": "2008b)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There is no single measure that can account for all of the factors influencing human sentence processing performance, and some of the most recent work on using parser-derived measures for psycholinguistic modeling has looked to try to derive multiple, complementary measures. One of the key distinctions being looked at is syntactic versus lexical expectations (Gibson, 2006) . For example, in Demberg and Keller (2008) , trials were run deriving surprisal from the Roark (2001) parser under two different conditions: fully lexicalized parsing, and fully unlexicalized parsing (to pre-terminal part-of-speech tags). Boston et al. (2008a) capture a similar distinction by making use of an unlexicalized PCFG within an Earley parser and a fully lexicalized unlabeled dependency parser (Nivre et al., 2007) . As Demberg and Keller (2008) point out, fully unlexicalized grammars ignore important lexico-syntactic information when deriving the \"syntactic\" expectations, such as subcategorization preferences of particular verbs, which are generally accepted to impact syntactic expectations in human sentence processing (Garnsey et al., 1997) . Demberg and Keller argue, based on their results, for unlexicalized surprisal instead of lexicalized surprisal. 
Here we present a novel method for deriving separate syntactic and lexical surprisal measures from a fully lexicalized incremental parser, to allow for rich probabilistic grammars to be used to derive either measure, and demonstrate the utility of this method versus that of Demberg and Keller in empirical trials.", "cite_spans": [ { "start": 361, "end": 375, "text": "(Gibson, 2006)", "ref_id": "BIBREF16" }, { "start": 394, "end": 419, "text": "Demberg and Keller (2008)", "ref_id": "BIBREF10" }, { "start": 466, "end": 478, "text": "Roark (2001)", "ref_id": "BIBREF30" }, { "start": 783, "end": 803, "text": "(Nivre et al., 2007)", "ref_id": "BIBREF28" }, { "start": 809, "end": 834, "text": "Demberg and Keller (2008)", "ref_id": "BIBREF10" }, { "start": 1115, "end": 1137, "text": "(Garnsey et al., 1997)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The use of large-scale lexicalized grammars presents a problem for using an Earley parser to derive surprisal or for the calculation of entropy as Hale (2003; defines it, because both methods require matrix inversion of a matrix with dimensionality the size of the non-terminal set. With very large lexicalized PCFGs, the size of the nonterminal set is too large for tractable matrix inversion. The use of an incremental, beam-search parser provides a tractable approximation to both measures. Incremental top-down and left-corner parsers have been shown to effectively (and efficiently) make use of non-local features from the left-context to yield very high accuracy syntactic parses (Roark, 2001; Henderson, 2003; Collins and Roark, 2004) , and we will use such rich models to derive our scores.", "cite_spans": [ { "start": 147, "end": 158, "text": "Hale (2003;", "ref_id": "BIBREF18" }, { "start": 686, "end": 699, "text": "(Roark, 2001;", "ref_id": "BIBREF30" }, { "start": 700, "end": 716, "text": "Henderson, 2003;", "ref_id": "BIBREF20" }, { "start": 717, "end": 741, "text": "Collins and Roark, 2004)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In addition to teasing apart syntactic and lexical surprisal (defined explicitly in \u00a73), we present an approximation to the full entropy that Hale (2003; used to define the entropy reduction hypothesis. Such an entropy measure is derived via a predictive step, advancing the parses independently of the input, as described in \u00a73.3. We also present syntactic and lexical alternatives for this measure, and demonstrate the utility of making such a dis-tinction for entropy as well as surprisal.", "cite_spans": [ { "start": 142, "end": 153, "text": "Hale (2003;", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The purpose of this paper is threefold. First, to present a careful and well-motivated decomposition of lexical and syntactic expectation-based measures from a given lexicalized PCFG. Second, to explicitly document methods for calculating these and other measures from a specific incremental parser. And finally, to present some empirical validation of the novel measures from real reading time trials. 
We modified the Roark (2001) parser to calculate the discussed measures 1 , and the empirical results in \u00a74 show several things, including: 1) using a fully lexicalized parser to calculate syntactic surprisal and entropy provides higher predictive utility for reading times than these measures calculated via unlexicalized parsing (as in Demberg and Keller); and 2) syntactic entropy is a useful predictor of reading time.", "cite_spans": [ { "start": 419, "end": 431, "text": "Roark (2001)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A probabilistic context-free grammar (PCFG) G = (V, T, S \u2020 , P, \u03c1) consists of a set of nonterminal variables V ; a set of terminal items (words) T ; a special start non-terminal S \u2020 \u2208 V ; a set of rule productions P of the form A \u2192 \u03b1 for A \u2208 V , \u03b1 \u2208 (V \u222a T ) * ; and a function \u03c1 that assigns probabilities to each rule in P such that for any given non-terminal symbol X \u2208 V , \u03b1 \u03c1(X \u2192 \u03b1) = 1. For a given rule A \u2192 \u03b1 \u2208 P , let the function RHS return the right-hand side of the rule, i.e., RHS(A \u2192 \u03b1) = \u03b1. Without loss of generality, we will assume that for every rule A \u2192 \u03b1 \u2208 P , one of two cases holds: either RHS(A \u2192 \u03b1) \u2208 T or RHS(A \u2192 \u03b1) \u2208 V * . That is, the right-hand side sequences consist of either (1) exactly one terminal item, or (2) zero or more non-terminals.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notation and preliminaries", "sec_num": "2" }, { "text": "Let W \u2208 T n be a terminal string of length n, i.e., W = W We can define a \"derives\" relation (denoted \u21d2 G for a given PCFG G) as follows: \u03b2A\u03b3 \u21d2 G \u03b2\u03b1\u03b3 if and only if A \u2192 \u03b1 \u2208 P . A string W \u2208 T * is in the language of a grammar G if and only if S \u2020 + \u21d2 G W , i.e., a sequence of one or more derivation steps yields the string from the start non-terminal. A leftmost derivation begins with S \u2020 and each derivation step replaces the leftmost non-terminal A in the yield with some \u03b1 such that A \u2192 \u03b1 \u2208 P . For a leftmost derivation S \u2020 * \u21d2 G \u03b1, where \u03b1 \u2208 (V \u222a T ) * , the sequence of derivation steps that yield \u03b1 can be represented as a tree, with the start symbol S \u2020 at the root, and the \"yield\" sequence \u03b1 at the leaves of the tree. A complete tree has only terminal items in the yield, i.e., \u03b1 \u2208 T * ; a partial tree has some non-terminal items in the yield. With a leftmost derivation, the yield \u03b1 = \u03b2\u03b3 partitions into an initial sequence of terminals \u03b2 \u2208 T * followed by a sequence of non-terminals \u03b3 \u2208 V * . For a complete derivation, \u03b3 = ; for a partial derivation \u03b3 \u2208 V + , i.e., one or more non-terminals. Let T (G, W [1, i]) be the set of complete trees with W [1, i] as the yield of the tree, given PCFG G.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notation and preliminaries", "sec_num": "2" }, { "text": "A leftmost derivation D consists of a sequence of |D| steps. Let D i represent the i th step in the derivation D, and D[i, j] represent the subsequence of steps in D beginning with D i and ending with D j . Note that D |D| is the last step in the derivation, and D[1, |D|] is the derivation as a whole. 
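For concreteness, a grammar of this form and a single leftmost derivation can be written down directly as data. The following minimal sketch (toy categories, rules and probabilities, not taken from any trained model) represents each rule as a (left-hand side, right-hand side, probability) triple, with the right-hand side either a tuple of non-terminals or a single terminal word, and a partial leftmost derivation as the ordered list of rules applied:

```python
# Toy PCFG: each rule is (lhs, rhs, probability); rhs is either a tuple of
# non-terminals or a single terminal word.  Probabilities for each lhs sum to 1.
RULES = [
    ("S+",  ("S",),        1.0),   # "S+" stands in for the start symbol S-dagger
    ("S",   ("NP", "VP"),  1.0),
    ("NP",  ("DT", "NN"),  0.6),
    ("NP",  ("NN",),       0.4),
    ("VP",  ("VBD", "NP"), 0.7),
    ("VP",  ("VBD",),      0.3),
    ("DT",  "the",         1.0),
    ("NN",  "dog",         0.5),
    ("NN",  "cat",         0.5),
    ("VBD", "barked",      1.0),
]

# A partial leftmost derivation is simply the sequence of rules applied, in order.
# This one builds the left-context of "the dog ..." and ends with a POS -> word rule.
derivation = [RULES[0], RULES[1], RULES[2], RULES[6], RULES[7]]
for lhs, rhs, prob in derivation:
    print(f"{lhs} -> {rhs}   p = {prob}")
```

In the parser described below, rule probabilities are additionally conditioned on left-context features, but the bookkeeping over derivations is the same. 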
Each step D i in the derivation is a rule in G, i.e., D i \u2208 P for all i. The probability of the derivation and the corresponding tree is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notation and preliminaries", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c1(D) = m i=1 \u03c1(D i )", "eq_num": "(1)" } ], "section": "Notation and preliminaries", "sec_num": "2" }, { "text": "Let D(G, W [1, i]) be the set of all possible leftmost derivations D (with respect to G) such that RHS(D |D| ) = W i . These are the set of partial leftmost derivations whose last step used a production with terminal W i on the right-hand side. The prefix probability of", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notation and preliminaries", "sec_num": "2" }, { "text": "W [1, i] with respect to G is PrefixProb G (W [1, i]) = D\u2208D(G,W [1,i]) \u03c1(D) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notation and preliminaries", "sec_num": "2" }, { "text": "From this prefix probability, we can calculate the conditional probability of each word w \u2208 T in the terminal vocabulary, given the preceding sequence W [1, i] as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notation and preliminaries", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "PG(w | W [1, i]) = PrefixProbG(W [1, i]w) P w \u2208T PrefixProbG(W [1, i]w ) = PrefixProbG(W [1, i]w) PrefixProbG(W [1, i])", "eq_num": "(3)" } ], "section": "Notation and preliminaries", "sec_num": "2" }, { "text": "This, in fact, is precisely the conditional probability that is used for language modeling for such applications as speech recognition and machine translation, which was the motivation for various syntactic language modeling approaches (Jelinek and Lafferty, 1991; Stolcke, 1995; Chelba and Jelinek, 1998; Roark, 2001) .", "cite_spans": [ { "start": 236, "end": 264, "text": "(Jelinek and Lafferty, 1991;", "ref_id": "BIBREF22" }, { "start": 265, "end": 279, "text": "Stolcke, 1995;", "ref_id": "BIBREF34" }, { "start": 280, "end": 305, "text": "Chelba and Jelinek, 1998;", "ref_id": "BIBREF7" }, { "start": 306, "end": 318, "text": "Roark, 2001)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Notation and preliminaries", "sec_num": "2" }, { "text": "As with language modeling, it is important to model the end of the string as well, usually with an explicit end symbol, e.g., . For a string W [1, i], we can calculate its prefix probability as shown above. 
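Before turning to the complete-string probability, the following sketch shows how Eqs. 2 and 3 reduce to simple sums and ratios once the weighted set of partial derivations for each prefix is available (the derivation probabilities below are invented for illustration; in the parser these sums are taken over the beam):

```python
import math

# rho(D) for every partial derivation whose last step generated the given word;
# the numbers are invented purely for illustration.
derivs_up_to = {
    ("the",):       [0.30, 0.10],         # derivations covering W[1,1] = "the"
    ("the", "dog"): [0.12, 0.03, 0.01],   # derivations covering W[1,2] = "the dog"
}

def prefix_prob(words):
    """Eq. 2: sum of derivation probabilities over D(G, W[1,i])."""
    return sum(derivs_up_to[tuple(words)])

def next_word_prob(context, word):
    """Eq. 3: P_G(word | context) as a ratio of prefix probabilities."""
    return prefix_prob(tuple(context) + (word,)) / prefix_prob(context)

p = next_word_prob(("the",), "dog")
print("P(dog | the) =", p)             # 0.16 / 0.40 = 0.4
print("-log P       =", -math.log(p))  # previews the surprisal of Eq. 5
```

The same ratio of prefix probabilities underlies all of the measures defined below. 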
To calculate its complete probability, we must sum the probabilities over the set of", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notation and preliminaries", "sec_num": "2" }, { "text": "complete trees T (G, W [1, i]).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notation and preliminaries", "sec_num": "2" }, { "text": "In such a way, we can calculate the conditional probability of ending the string with given W [1, i] as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Notation and preliminaries", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P G ( | W [1, i]) = D\u2208T (G,W [1,i]) \u03c1(D) PrefixProb G (W [1, i])", "eq_num": "(4)" } ], "section": "Notation and preliminaries", "sec_num": "2" }, { "text": "In this section, we review relevant details of the Roark (2001) incremental top-down parser, as configured for use here. As presented in Roark (2004) , the probabilities in the PCFG are smoothed so that the parser is guaranteed not to fail due to garden pathing, despite following a beam search strategy. Hence there is always a nonzero prefix probability as defined in Eq. 2. The parser follows a top-down leftmost derivation strategy. The grammar is factored so that every production has either a single terminal item on the right-hand side or is of the form A \u2192 B A-B, where A,B \u2208 V and the factored A-B category can expand to any sequence of children categories of A that can follow B. This factorization of nary productions continues to nullary factored productions, i.e., the end of the original production A \u2192 B 1 . . . B n is signaled with an empty produc-", "cite_spans": [ { "start": 51, "end": 63, "text": "Roark (2001)", "ref_id": "BIBREF30" }, { "start": 137, "end": 149, "text": "Roark (2004)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Incremental top-down parsing", "sec_num": "2.1" }, { "text": "tion A-B 1 -. . . -B n \u2192 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental top-down parsing", "sec_num": "2.1" }, { "text": "The parser maintains a set of possible connected derivations, weighted via the PCFG. It uses a beam search, whereby the highest scoring derivations are worked on first, and derivations that fall outside of the beam are discarded. The reader is referred to Roark (2001; 2004) for specifics about the beam search.", "cite_spans": [ { "start": 256, "end": 268, "text": "Roark (2001;", "ref_id": "BIBREF30" }, { "start": 269, "end": 274, "text": "2004)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Incremental top-down parsing", "sec_num": "2.1" }, { "text": "The model conditions the probability of each production on features extracted from the partial tree, including non-local node labels such as parents, grandparents and siblings from the leftcontext, as well as c-commanding lexical items. 
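To make this conditioning concrete: a production probability in such a model is a conditional distribution over right-hand sides given features of the connected left-context. A minimal, invented sketch is given below; the actual feature set, backoff scheme and smoothing follow Roark (2001; 2004) and are considerably richer than this.

```python
from collections import defaultdict

# Invented counts of how often an object NP expands each way when its parent is
# a VP and the c-commanding verb in the left-context is "saw".
counts = defaultdict(lambda: defaultdict(int))
counts[("NP", "VP", "saw")][("DT", "NN")] = 8
counts[("NP", "VP", "saw")][("NN",)] = 2

def expansion_prob(rhs, lhs, parent, cc_head):
    """Relative-frequency estimate of P(lhs -> rhs | lhs, parent, c-commanding head)."""
    context = (lhs, parent, cc_head)
    total = sum(counts[context].values())
    if total == 0:
        return 0.0  # the real model instead backs off to coarser contexts and smooths
    return counts[context][rhs] / total

print(expansion_prob(("DT", "NN"), "NP", "VP", "saw"))  # 0.8
```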
Hence this is a lexicalized grammar, though the incremental nature precludes a general head-first strategy, rather one that looks to the left-context for c-commanding lexical items.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental top-down parsing", "sec_num": "2.1" }, { "text": "To avoid some of the early prediction of structure, the version of the Roark parser that we used performs an additional grammar transformation beyond the simple factorization already described -a selective left-corner transform of left-recursive productions (Johnson and Roark, 2000) . In the transformed structure, slash categories are used to avoid predicting left-recursive structure until some explicit indication of modification is present, e.g., a preposition.", "cite_spans": [ { "start": 258, "end": 283, "text": "(Johnson and Roark, 2000)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Incremental top-down parsing", "sec_num": "2.1" }, { "text": "The final step in parsing, following the last word in the string, is to \"complete\" all non-terminals in the yield of the tree. All of these open nonterminals are composite factored categories, such as S-NP-VP, which are \"completed\" by rewriting to . The probability of these productions is what allows for the calculation of the conditional probability of ending the string, shown in Eq. 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental top-down parsing", "sec_num": "2.1" }, { "text": "One final note about the size of the non-terminal set and the intractability of exact inference for such a scenario. The non-terminal set not only includes the original atomic non-terminals of the grammar, but also any categories created by grammar factorization (S-NP) or the left-corner transform (NP/NP). Additionally, however, to remain context-free, the non-terminal set must include categories that incorporate non-local features used by the statistical model into their label, including parents, grandparents and sibling categories in the left-context, as well as c-commanding lexical heads. These non-local features must be made local by encoding them in the non-terminal labels, leading to a very large non-terminal set and intractable exact inference. Heavy smoothing is required when estimating the resulting PCFG. The benefit of such a non-terminal set is a rich model, which enables a more peaked statistical distribution around high quality syntactic structures and thus more effective pruning of the search space. The fully connected left-context produced by topdown derivation strategies provides very rich features for the stochastic parsing models. See Roark (2001; 2004) for discussion of these issues.", "cite_spans": [ { "start": 1171, "end": 1183, "text": "Roark (2001;", "ref_id": "BIBREF30" }, { "start": 1184, "end": 1189, "text": "2004)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Incremental top-down parsing", "sec_num": "2.1" }, { "text": "We now turn to measures that can be derived from the parser which may be of use for psycholinguistic modeling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental top-down parsing", "sec_num": "2.1" }, { "text": "3 Parser and grammar derived measures", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental top-down parsing", "sec_num": "2.1" }, { "text": "The surprisal at word W i is the negative log probability of W i given the preceding words. 
Using prefix probabilities, this can be calculated as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Surprisal", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "S G (W i ) = \u2212 log PrefixProb G (W [1, i]) PrefixProb G (W [1, i \u2212 1])", "eq_num": "(5)" } ], "section": "Surprisal", "sec_num": "3.1" }, { "text": "Substituting equation 2 into this, we get", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Surprisal", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "S G (W i ) = \u2212 log D\u2208D(G,W [1,i]) \u03c1(D) D\u2208D(G,W [1,i\u22121]) \u03c1(D)", "eq_num": "(6)" } ], "section": "Surprisal", "sec_num": "3.1" }, { "text": "If we are using a beam-search parser, some of the derivations are pruned away.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Surprisal", "sec_num": "3.1" }, { "text": "Let B(G, W [1, i]) \u2286 D(G, W [1, i])", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Surprisal", "sec_num": "3.1" }, { "text": "be the set of derivations in the beam. Then the surprisal can be approximated as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Surprisal", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "S G (W i ) \u2248 \u2212 log D\u2208B(G,W [1,i]) \u03c1(D) D\u2208B(G,W [1,i\u22121]) \u03c1(D)", "eq_num": "(7)" } ], "section": "Surprisal", "sec_num": "3.1" }, { "text": "Any pruning in the beam search will result in a deficient probability distribution, i.e., a distribution that sums to less than 1. Roark's thesis (2001) showed that the amount of probability mass lost for this particular approach is very low, hence this provides a very tight bound on the actual surprisal given the model.", "cite_spans": [ { "start": 131, "end": 152, "text": "Roark's thesis (2001)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Surprisal", "sec_num": "3.1" }, { "text": "High surprisal scores result when the prefix probability at word W i is low relative to the prefix probability at word W i\u22121 . Sometimes this is due to the identity of W i , i.e., it is a surprising word given the context. Other times, it may not be the lexical identity of the word so much as the syntactic structure that must be created to integrate the word into the derivations. One would like to tease surprisal apart into \"syntactic surprisal\" versus \"lexical surprisal\", which would capture this intuition of the lexical versus syntactic dimensions to the score. Our solution to this has the beneficial property of producing two scores whose sum equals the original surprisal score. The original surprisal score is calculated via sets of partial derivations at the point when each word W i is integrated into the syntactic structure, D(G, W [1, i]). We then calculate the ratio from point to point in sequence. 
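Under the beam approximation of Eq. 7, this word-to-word ratio is simply a ratio of two sums over consecutive beams, as in the following minimal sketch (beam weights invented for illustration; the natural log is used here):

```python
import math

def surprisal(prev_beam, cur_beam):
    """Eq. 7: -log of the ratio of summed derivation probabilities in the beam."""
    return -math.log(sum(cur_beam) / sum(prev_beam))

# Invented derivation probabilities for the beams after words i-1 and i.
beam_before = [0.020, 0.010, 0.005]   # B(G, W[1, i-1]), total 0.035
beam_after  = [0.004, 0.001]          # B(G, W[1, i]),   total 0.005
print(surprisal(beam_before, beam_after))  # -log(0.005/0.035), about 1.95
```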
To tease apart the lexical and syntactic surprisal, we will consider sets of partial derivations immediately before each word W i is integrated into the syntactic structure, i.e.,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical and Syntactic surprisal", "sec_num": "3.2" }, { "text": "D[1, |D|\u22121] for D \u2208 D(G, W [1, i]).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical and Syntactic surprisal", "sec_num": "3.2" }, { "text": "Recall that the last derivation move for every derivation in the set is from the POS-tag to the lexical item. Hence the sequence of derivation moves that excludes the last one includes all structure except the word W i . Then the syntactic surprisal is calculated as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical and Syntactic surprisal", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "SynSG(Wi) = \u2212 log P D\u2208D(G,W [1,i]) \u03c1(D[1, |D|\u22121]) P D\u2208D(G,W [1,i\u22121]) \u03c1(D)", "eq_num": "(8)" } ], "section": "Lexical and Syntactic surprisal", "sec_num": "3.2" }, { "text": "and the lexical surprisal is calculated as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical and Syntactic surprisal", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "LexSG(Wi) = \u2212 log P D\u2208D(G,W [1,i]) \u03c1(D) P D\u2208D(G,W [1,i]) \u03c1(D[1, |D|\u22121])", "eq_num": "(9)" } ], "section": "Lexical and Syntactic surprisal", "sec_num": "3.2" }, { "text": "Note that the numerator of SynS G (W i ) is the denominator of LexS G (W i ), hence they sum to form total surprisal S G (W i ). As with total surprisal, these measures can be defined either for the full set D (G, W [1, i] ) or for a pruned beam of deriva-", "cite_spans": [], "ref_spans": [ { "start": 210, "end": 222, "text": "(G, W [1, i]", "ref_id": null } ], "eq_spans": [], "section": "Lexical and Syntactic surprisal", "sec_num": "3.2" }, { "text": "tions B(G, W [1, i]) \u2286 D(G, W [1, i]).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical and Syntactic surprisal", "sec_num": "3.2" }, { "text": "Finally, we replicated the Demberg and Keller (2008) \"unlexicalized\" surprisal by replacing every lexical item in the training corpus with its POS-tag, and then parsing the POS-tags of the language samples rather than the words. This differs from our syntactic surprisal by having no lexical conditioning events for rule probabilities, and by having no ambiguity about the POS-tag of the lexical items in the string. We will refer to the resulting surprisal measure as \"POS surprisal\" to distinguish it from our syntactic surprisal measure.", "cite_spans": [ { "start": 27, "end": 52, "text": "Demberg and Keller (2008)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Lexical and Syntactic surprisal", "sec_num": "3.2" }, { "text": "Entropy scores of the sort advocated by Hale (2003; involve calculation over the set of complete derivations consistent with the set of partial derivations. Hale performs this calculation efficiently via matrix inversion, which explains the use of relatively small-scale grammars with tractably sized non-terminal sets. 
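Returning briefly to the surprisal decomposition: because the last step of every derivation in D(G, W[1,i]) is the POS-tag-to-word rule, Eqs. 8 and 9 can be computed from the same beam by also accumulating each derivation's probability with that final step divided out, as in this minimal sketch (numbers invented for illustration):

```python
import math

# Each partial derivation in the beam for W[1,i] is represented here by two
# invented numbers: rho(D[1,|D|-1]), its probability without the final
# POS-tag -> word step, and rho(D), its probability including that step.
prev_beam_total = 0.035                 # sum of rho(D) over B(G, W[1, i-1])
beam = [
    # (rho(D[1,|D|-1]),  rho(D))
    (0.0100, 0.0040),
    (0.0030, 0.0010),
]

syn_mass = sum(without_word for without_word, _ in beam)
lex_mass = sum(with_word for _, with_word in beam)

syntactic_surprisal = -math.log(syn_mass / prev_beam_total)  # Eq. 8
lexical_surprisal   = -math.log(lex_mass / syn_mass)         # Eq. 9
total_surprisal     = -math.log(lex_mass / prev_beam_total)  # Eq. 5

# The two components sum to total surprisal (up to floating-point error).
print(syntactic_surprisal, lexical_surprisal, total_surprisal)
```

The entropy calculation, by contrast, cannot be localized to the final derivation step in this way, which is why Hale resorts to a closed-form computation over the grammar. 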
Such methods are not tractable for the kinds of richly conditioned, large-scale PCFGs that we advocate using here. At each word in the string, the Roark (2001) top-down parser provides access to the weighted set of partial analyses in the beam; the set of complete derivations consistent with these is not immediately accessible, hence additional work is required to calculate such measures.", "cite_spans": [ { "start": 40, "end": 51, "text": "Hale (2003;", "ref_id": "BIBREF18" }, { "start": 467, "end": 479, "text": "Roark (2001)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Entropy", "sec_num": "3.3" }, { "text": "Let H(D) be the entropy over a set of derivations D, calculated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entropy", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "H(D) = \u2212 X D\u2208D \u03c1(D) P D \u2208D \u03c1(D ) log \u03c1(D) P D \u2208D \u03c1(D )", "eq_num": "(10)" } ], "section": "Entropy", "sec_num": "3.3" }, { "text": "If the set of derivations D = D(G, W [1, i]) is a set of partial derivations for string W [1, i], then H(D) is a measure of uncertainty over the partial derivations, i.e., the uncertainty regarding the correct analysis of what has already been processed. This can be calculated directly from the existing parser operations. If the set of derivations are the complete derivations consistent with the set of partial derivations -complete derivations that could occur over the set of possible continuations of the string -then this is a measure of the uncertainty about what is yet to come. We would like measures that can capture this distinction between (a) uncertainty of what has already been processed (\"current ambiguity\") versus (b) uncertainty of what is yet to be processed (\"predictive entropy\"). In addition, as with surprisal, we would like to tease apart the syntactic uncertainty versus lexical uncertainty.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entropy", "sec_num": "3.3" }, { "text": "To calculate the predictive entropy after word sequence W [1, i], we modify the parser as follows: the parser extends the set of partial derivations to include all possible next words (the entire vocabulary plus ), and calculates the entropy over that set. This measure is calculated from just one additional word beyond the current word, and hence is an approximation to Hale's conditional entropy of grammatical continuations, which is over complete derivations. We will denote this as H 1 G (W [1, i]) and calculate it as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entropy", "sec_num": "3.3" }, { "text": "H 1 G (W [1, i]) = H( w\u2208T \u222a{} D(G, W [1, i]w)) (11)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entropy", "sec_num": "3.3" }, { "text": "This is performing a predictive step that the baseline parser does not perform, extending the parses to all possible next words. Unlike surprisal, entropy does not decompose straightforwardly into syntactic and lexical components that sum to the original composite measure. To tease apart entropy due to syntactic uncertainty versus that due to lexical uncertainty, we can define the set of derivations up to the preterminal (POS-tag) non-terminals as follows. 
Let S(D) = {D[1, |D|\u22121] : D \u2208 D}, i.e., the set of derivations achieved by removing the last step of all derivations in D. Then we can calculate a \"syntactic\" H 1 G as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entropy", "sec_num": "3.3" }, { "text": "SynH 1 G (W [1, i]) = H( [ w\u2208T \u222a{} S(D(G, W [1, i]w))) (12)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entropy", "sec_num": "3.3" }, { "text": "Finally, \"lexical\" H 1 G is defined in terms of the conditional probabilities derived from prefix probabilities as defined in Eq. 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entropy", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "LexH 1 G (W [1, i]) = \u2212 X w\u2208T \u222a{} PG(w | W [1, i]) log PG(w | W [1, i])", "eq_num": "(13)" } ], "section": "Entropy", "sec_num": "3.3" }, { "text": "As a practical matter, these values are calculated within the Roark parser as follows. A \"dummy\" word is created that can be assigned every POStag, and the parser extends from the current state to this dummy word. (The beam threshold is greatly expanded to allow for many possible extensions.) Then every word in the vocabulary is substituted for the word, and the appropriate probabilities calculated over the beam. Finally, the actual next word is substituted, the beam threshold is reduced to the actual working threshold, and the requisite number of analyses are advanced to continue parsing the string. This represents a significant amount of additional work for the parser -particularly for vocabulary sizes that we currently use, on the order of tens of thousands of words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entropy", "sec_num": "3.3" }, { "text": "As with surprisal, we can calculate an \"unlexicalized\" version of the measure by training and parsing just to POS-tags. We will refer to this sort of entropy as \"POS entropy\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entropy", "sec_num": "3.3" }, { "text": "In order to test the psycholinguistic relevance of the different measures produced by the parser, we conducted a word by word reading experiment. 23 native speakers of English read 4 short texts (mean length: 883.5 words, 49.25 sentences). The texts were the written versions of narratives used in a parallel fMRI experiment making use of the same parser derived measures and whose results will be published in a different paper (Bachrach et al., 2009) . The narratives contained a high density of syntactically complex structures (in the form of sentential embeddings, relative clauses and other non-local dependencies) but were constructed so as to appear highly natural. 
The modified version of the Roark parser, trained on the Brown Corpus section of the Penn Treebank (Marcus et al., 1993) , was used to parse the different narratives and produce the word by word measures.", "cite_spans": [ { "start": 429, "end": 452, "text": "(Bachrach et al., 2009)", "ref_id": "BIBREF1" }, { "start": 773, "end": 794, "text": "(Marcus et al., 1993)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Empirical validation 4.1 Subjects and stimuli", "sec_num": "4" }, { "text": "Each narrative was presented line by line (certain sentences required more than one line) on a computer screen (Dell Optiplex 755 running Windows XP Professional) using Linger 2.88 2 . Each line contained 11.5 words on average. Each word would appear in its relative position on the screen. The subject would then be required to push a keyboard button to advance to the next word. The original word would then disappear and the following word appear in the subsequent position on the screen. After certain sentences a comprehension question would appear on the screen (10 per narrative). This was done in order to encourage subjects to pay attention and to provide data for a post-hoc evaluation of comprehension. After each narrative, subjects were instructed to take a short break (2 minutes on average).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "4.2" }, { "text": "The log (base 10) of the reaction times were analyzed using a linear mixed effects regression analysis implemented in the language R (Bates et al., 2008) . Reaction times longer than 1500 ms and shorter than 150 ms (raw) were excluded from the analysis (4.8% of total data). Since button press latencies inferior to 150 ms must have been planned prior to the presentation of the word, we considered that they could not reflect stimulus driven effects. Data from the first and last words on each line were discarded.", "cite_spans": [ { "start": 133, "end": 153, "text": "(Bates et al., 2008)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Data analysis", "sec_num": "4.3" }, { "text": "The combined data from the 4 narratives was first modeled using a model which included order of word in the narrative 3 , word length, parserderived lexical surprisal, unigram frequency, bigram probability, syntactic surprisal, lexical entropy, syntactic entropy and mean number of parser derivation steps as numeric regressors. We also included the unlexicalized POS variants of syntactic surprisal and entropy, along the lines of Demberg and Keller (2008) , as detailed in \u00a7 3. Table 1 presents the correlations between these mean-centered measures.", "cite_spans": [ { "start": 432, "end": 457, "text": "Demberg and Keller (2008)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 480, "end": 487, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Data analysis", "sec_num": "4.3" }, { "text": "In addition, we modeled word class (open/closed) as a categorical factor in order to assess interaction between class and the variables of interest, since such an interaction has been observed in the case of frequency (Bradley, 1983) . Finally, the random effect part of the model included intercepts for subjects, words and sentences. 
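The nested-model comparisons reported in \u00a74.4 use a likelihood ratio test; in terms of model log-likelihoods this amounts to the following computation (a minimal sketch with placeholder values; the actual models were fit with lme4 in R):

```python
from scipy.stats import chi2

def likelihood_ratio_test(loglik_small, loglik_large, df_diff):
    """Compare nested models; the larger model adds df_diff parameters."""
    stat = 2.0 * (loglik_large - loglik_small)
    return stat, chi2.sf(stat, df_diff)

# Placeholder log-likelihoods for a model with a single combined-surprisal
# regressor (small) versus separate lexical and syntactic surprisal (large).
stat, p = likelihood_ratio_test(loglik_small=27410.00, loglik_large=27415.35, df_diff=1)
print(f"chi2(1) = {stat:.2f}, p = {p:.4f}")
```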
We report significant effects at the threshold p < .05.", "cite_spans": [ { "start": 218, "end": 233, "text": "(Bradley, 1983)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Data analysis", "sec_num": "4.3" }, { "text": "Given the presence of significant interactions between lexical class (open/closed) and a number of the variables of interests, we decided to split the data set into open and closed class words and model these separately (linear mixed effects with the same numeric variables as in the full model).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data analysis", "sec_num": "4.3" }, { "text": "In order to evaluate the usefulness of splitting total surprisal into lexical and syntactic components we compared, using a likelihood ratio test, a model where lexical and syntactic surprisal are modeled as distinct regressors to a model where a single regressor equal to their sum (total surprisal) was included. If the larger model provides a significantly better fit than the smaller model, this provides evidence that distinguishing between lexical and syntactic contributions to surprisal is relevant. Since total entropy is not a sum of syntactic and lexical entropy, an analogous test would not be valid in that case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data analysis", "sec_num": "4.3" }, { "text": "All subjects successfully answered the comprehension questions (92.8% correct responses, S.D.=5.1). In the full model, we observed significant main effects of word class as well as of lexical surprisal, bigram probability, unigram frequency, syntactic entropy, POS entropy and of order in the narrative. Syntactic surprisal, lexical entropy and number of steps had no significant effect. Word length also had no significant main effect but interacted significantly with word class (open/closed). Word class also interacted significantly with lexical surprisal, unigram frequency and syntactic surprisal. The presence of these interactions led us to construct models restricted to open and closed class items respectively. The estimated parameters are reported in Table 2. Reading time for open class words showed significant effects of unigram frequency, syntactic surprisal, syntactic entropy, POS entropy and order within the narrative. The positive effect of length approached significance. Reading time for closed class words exhibited significant effects of lexical surprisal, bigram probability, syntactic entropy and order in the narrative. Length had a non-significant negative effect, thus explaining the interaction observed in the full model.", "cite_spans": [], "ref_spans": [ { "start": 763, "end": 771, "text": "Table 2.", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results", "sec_num": "4.4" }, { "text": "The models with separate lexical and syntactic surprisal performed better than models including combined surprisal. For open class words, the Akaike's information criterion (AIC) was -54810 for the combined model and -54819 for the independent model ( two, nested, models: \u03c7 2 (1)=10.7, p<.001). 
For closed class items, combined model's AIC was -61467 and full model's AIC was -61469 (likelihood ratio test: \u03c7 2 (1)=3.54, p=0.06).", "cite_spans": [ { "start": 250, "end": 251, "text": "(", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.4" }, { "text": "Our results demonstrate the relevance of modeling psycholinguistic processes using an incremental probabilistic parser, and the utility of the novel measures presented here. Of particular interest are: the significant effects of our syntactic entropy measure; the independent contributions of lexical surprisal, bigram probability and unigram frequency; and the differences between the predictions of the lexicalized parsing model and the unlexicalized (POS) parsing model. The effect of entropy, or uncertainty regarding the upcoming input independent of the surprise of that input, has been observed in non-linguistic tasks (Hyman, 1953; Bestmann et al., 2008) but to our knowledge has not been quantified before in the context of sentence processing. The usefulness of computational modeling is particularly evident in the case of entropy given the absence of any subjective procedure for its evaluation 4 . The results argue in favor of a predictive parsing architecture (Van Berkum et al., 2005) . The approach to entropy here differs from the one described in Hale (2006) in a couple of ways. First, as discussed above, the calculation procedure is different -we focus on extending the derivations with just one word, rather than to all possible complete derivations. Second, and most importantly, Hale emphasizes entropy reduction (or the gain in information, given an input, regarding the rest of the sentence) as the correlate of cognitive cost while here we are interested in the amount of entropy itself (and not the size of change). Interestingly, we observed only an effect of syntactic entropy, not lexical entropy. Recent ERP work has demonstrated that subjects do form specific lexical predictions in the context of sentence processing (Van Berkum et al., 2005; DeLong et al., 2005 ) and so we suspect that the absence of lexical entropy effect might be partly due to sparse data. Lexical surprisal and entropy were calculated using the internal state of a parser trained on the relatively small Brown corpus. Lexical entropy showed no significant effect while lexical surprisal affected only closed class words. This pattern of results might be due to the sparseness of the relevant information in such a small corpus (e.g., verb/object preferences) and the relevance of extra-textual dimensions (world knowledge, contextual information) to lexical-specific prediction. Closed class words are both more frequent (and hence better sampled) and are less sensitive to world knowledge, yet are often determined by the grammatical context. Demberg and Keller (2008) made use of the same parsing architecture used here to compute a syntactic surprisal measure, but used an unlexicalized parser (down to POS-tags rather than words) for this score. Their \"lexicalized\" surprisal is equivalent to our total surprisal (lexical surprisal + syntactic surprisal), while their POS surprisal is derived from a completely different model. In contrast, our approach achieves lexical and syntactic measures from the same model. In order to evaluate the difference between the two approaches we added unlexicalized POS surprisal calculated along the lines of that paper to our model, along with an unlexicalized POS entropy from the same model. 
We found no effect of unlexicalized POS surprisal 5 and a significant (but relatively small) effect of unlexicalized POS entropy. While syntactic surprisal was correlated with POS surprisal (see Table 1 ) and syntactic entropy correlated with POS entropy, the fact that our syntactic measures still had a significant effect suggests that lexical information contributes towards the formation of syntactic expectations.", "cite_spans": [ { "start": 626, "end": 639, "text": "(Hyman, 1953;", "ref_id": "BIBREF21" }, { "start": 640, "end": 662, "text": "Bestmann et al., 2008)", "ref_id": "BIBREF3" }, { "start": 975, "end": 1000, "text": "(Van Berkum et al., 2005)", "ref_id": "BIBREF36" }, { "start": 1066, "end": 1077, "text": "Hale (2006)", "ref_id": "BIBREF19" }, { "start": 1752, "end": 1777, "text": "(Van Berkum et al., 2005;", "ref_id": "BIBREF36" }, { "start": 1778, "end": 1797, "text": "DeLong et al., 2005", "ref_id": "BIBREF9" }, { "start": 2552, "end": 2577, "text": "Demberg and Keller (2008)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 3438, "end": 3445, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Discussion", "sec_num": "4.5" }, { "text": "While the effect of surprisal calculated by an incremental top down parser has been already demonstrated (Demberg and Keller, 2008) , our results argue for a distinction between the effect of lexical surprisal and that of syntactic surprisal without requiring unlexicalized parsing of the sort that Demberg and Keller advocate. It is important to keep in mind that this distinction between types of prediction (and as a consequence, prediction error) is not equivalent to the one drawn in the traditional cognitive science modularity debate, which has focused on the source of these predictions. We found a positive effect of syntactic surprisal in the case of open class words. The absence of an effect for closed class words remains to be explained.", "cite_spans": [ { "start": 105, "end": 131, "text": "(Demberg and Keller, 2008)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.5" }, { "text": "We quantified word specific surprisal using 3 sources: the parser's internal state (lexical surprisal); probability given the preceding word (negative log bigram probability); and the unigram frequency of the word in a large corpus 6 . As can be observed in Table 1 , these three measures are highly correlated 7 . This is the consequence of the smoothing in the estimation procedure but also relates to a more general fact about language use: overall, more frequent words are also words more expected to appear in a specific context (Anderson and Schooler, 1991) . Despite these strong correlations, the three measures produced independent effects. Unigram frequency had a significant effect for open class words while bigram probability and lexical surprisal each had an effect on reading time of closed class items. Bigram probability has been often found to affect reading time using eye movement measures. This is the first study to demonstrate an additional effect of contextual surprisal given the preceding sentential context (lexical surprisal). 
Demberg and Keller found no effect for surprisal once bigram and unigram probabilities were included in the model but, importantly, they did not distinguish lexical and syntactic surprisal, rather \"lexicalized\" and \"unlexicalized\" surprisal.", "cite_spans": [ { "start": 534, "end": 563, "text": "(Anderson and Schooler, 1991)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 258, "end": 265, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Discussion", "sec_num": "4.5" }, { "text": "We have presented novel methods for teasing apart syntactic and lexical surprisal from a fully lexicalized parser, as well as for extending the operation of a predictive parser to capture novel entropy measures that are also shown to be relevant to psycholinguistic modeling. Such automatic methods provide psycholinguistically relevant measures that are intractable to calculate by hand. The empirical validation presented here demonstrated that the new measures -particularly syntactic entropy and syntactic surprisal -have high utility for modeling human reading time data. Our approach to calculating syntactic surprisal, based on fully lexicalized parsing, provided significant effects, while the POS-tag based (unlexicalized) surprisal -of the sort used in Boston et al. (2008a) and Demberg and Keller (2008) -did not provide a significant effect in our trials. Further, we showed an effect of lexical surprisal for closed class words even when combined with unigram and bigram probabilities in the same model. This work contributes to the important, developing enterprise of leveraging data-driven NLP approaches to derive new measures of high utility for psycholinguistic and neuropsychological studies.", "cite_spans": [ { "start": 763, "end": 784, "text": "Boston et al. (2008a)", "ref_id": "BIBREF4" }, { "start": 789, "end": 814, "text": "Demberg and Keller (2008)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Summary", "sec_num": "5" }, { "text": "The parser version will be made publicly available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://tedlab.mit.edu/\u223cdr/Linger/readme.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This is a regressor to control for the trend of subjects to read faster later in the narrative.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The Cloze procedure(Taylor, 1953) is one way to derive probabilities that could be used to calculate entropy, though this procedure is usually conducted with lexical elicitation, which would make syntactic entropy calculations difficult.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We also ran the model including unlexicalized POS surprisal without our syntactic surprisal or syntactic entropy, and in this condition the unlexicalized POS surprisal measure had a nearly significant effect (t = 1.85), which is consistent with the results inBoston et al. (2008a) andDemberg and Keller (2008).6 The unigram frequencies came from the HAL corpus(Lund and Burgess, 1996). 
All other statistical models were estimated from the Brown Corpus.7 Unigram frequencies were represented as logs, the others as negative logs, hence the sign of the correlations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Thanks to Michael Collins, John Hale and Shravan Vasishth for valuable discussions about this work. This research was supported in part by NSF Grant #BCS-0826654. Any opinions, findings, conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the NSF.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Reflections of the environment in memory", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Anderson", "suffix": "" }, { "first": "L", "middle": [ "J" ], "last": "Schooler", "suffix": "" } ], "year": 1991, "venue": "Psychological Science", "volume": "2", "issue": "6", "pages": "396--408", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.R. Anderson and L.J. Schooler. 1991. Reflections of the environment in memory. Psychological Science, 2(6):396-408.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Incremental prediction in naturalistic language processing: An fMRI study", "authors": [ { "first": "A", "middle": [], "last": "Bachrach", "suffix": "" }, { "first": "B", "middle": [], "last": "Roark", "suffix": "" }, { "first": "A", "middle": [], "last": "Marantz", "suffix": "" }, { "first": "S", "middle": [], "last": "Whitfield-Gabrieli", "suffix": "" }, { "first": "C", "middle": [], "last": "Cardenas", "suffix": "" }, { "first": "J", "middle": [ "D E" ], "last": "Gabrieli", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Bachrach, B. Roark, A. Marantz, S. Whitfield- Gabrieli, C. Cardenas, and J.D.E. Gabrieli. 2009. Incremental prediction in naturalistic language pro- cessing: An fMRI study. In preparation.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "lme4: Linear mixed-effects models using S4 classes. R package version 0", "authors": [ { "first": "D", "middle": [], "last": "Bates", "suffix": "" }, { "first": "M", "middle": [], "last": "Maechler", "suffix": "" }, { "first": "B", "middle": [], "last": "Dai", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "999375--999395", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Bates, M. Maechler, and B. Dai, 2008. lme4: Linear mixed-effects models using S4 classes. R package version 0.999375-20.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Influence of uncertainty and surprise on human corticospinal excitability during preparation for action", "authors": [ { "first": "S", "middle": [], "last": "Bestmann", "suffix": "" }, { "first": "L", "middle": [ "M" ], "last": "Harrison", "suffix": "" }, { "first": "F", "middle": [], "last": "Blankenburg", "suffix": "" }, { "first": "R", "middle": [ "B" ], "last": "Mars", "suffix": "" }, { "first": "P", "middle": [], "last": "Haggard", "suffix": "" }, { "first": "K", "middle": [ "J" ], "last": "Friston", "suffix": "" } ], "year": 2008, "venue": "Current Biology", "volume": "18", "issue": "", "pages": "775--780", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Bestmann, L.M. Harrison, F. Blankenburg, R.B. Mars, P. Haggard, and K.J. Friston. 2008. 
Influence of uncertainty and surprise on human corticospinal excitability during preparation for action. Current Biology, 18:775-780.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Parsing costs as predictors of reading difficulty: An evaluation using the Potsdam sentence corpus", "authors": [ { "first": "M", "middle": [], "last": "Ferrara Boston", "suffix": "" }, { "first": "J", "middle": [ "T" ], "last": "Hale", "suffix": "" }, { "first": "R", "middle": [], "last": "Kliegl", "suffix": "" }, { "first": "U", "middle": [], "last": "Patil", "suffix": "" }, { "first": "S", "middle": [], "last": "Vasishth", "suffix": "" } ], "year": 2008, "venue": "Journal of Eye Movement Research", "volume": "2", "issue": "1", "pages": "1--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Ferrara Boston, J.T. Hale, R. Kliegl, U. Patil, and S. Vasishth. 2008a. Parsing costs as predictors of reading difficulty: An evaluation using the Pots- dam sentence corpus. Journal of Eye Movement Re- search, 2(1):1-12.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Surprising parser actions and reading difficulty", "authors": [ { "first": "M", "middle": [], "last": "Ferrara Boston", "suffix": "" }, { "first": "J", "middle": [ "T" ], "last": "Hale", "suffix": "" }, { "first": "R", "middle": [], "last": "Kliegl", "suffix": "" }, { "first": "S", "middle": [], "last": "Vasishth", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08:HLT, Short Papers", "volume": "", "issue": "", "pages": "5--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Ferrara Boston, J.T. Hale, R. Kliegl, and S. Va- sishth. 2008b. Surprising parser actions and read- ing difficulty. In Proceedings of ACL-08:HLT, Short Papers, pages 5-8.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Computational Distinctions of Vocabulary Type", "authors": [ { "first": "D", "middle": [ "C" ], "last": "Bradley", "suffix": "" } ], "year": 1983, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D.C. Bradley. 1983. Computational Distinctions of Vocabulary Type. Indiana University Linguistics Club, Bloomington.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Exploiting syntactic structure for language modeling", "authors": [ { "first": "C", "middle": [], "last": "Chelba", "suffix": "" }, { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" } ], "year": 1998, "venue": "Proceedings of ACL-COLING", "volume": "", "issue": "", "pages": "225--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Chelba and F. Jelinek. 1998. Exploiting syntactic structure for language modeling. In Proceedings of ACL-COLING, pages 225-231.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Incremental parsing with the perceptron algorithm", "authors": [ { "first": "M", "middle": [ "J" ], "last": "Collins", "suffix": "" }, { "first": "B", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2004, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "111--118", "other_ids": {}, "num": null, "urls": [], "raw_text": "M.J. Collins and B. Roark. 2004. Incremental parsing with the perceptron algorithm. 
In Proceedings of ACL, pages 111-118.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Probabilistic word pre-activation during language comprehension inferred from electrical brain activity", "authors": [ { "first": "K", "middle": [ "A" ], "last": "Delong", "suffix": "" }, { "first": "T", "middle": [ "P" ], "last": "Urbach", "suffix": "" }, { "first": "M", "middle": [], "last": "Kutas", "suffix": "" } ], "year": 2005, "venue": "Nature Neuroscience", "volume": "8", "issue": "8", "pages": "1117--1121", "other_ids": {}, "num": null, "urls": [], "raw_text": "K.A. DeLong, T.P. Urbach, and M. Kutas. 2005. Prob- abilistic word pre-activation during language com- prehension inferred from electrical brain activity. Nature Neuroscience, 8(8):1117-1121.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Data from eyetracking corpora as evidence for theories of syntactic processing complexity", "authors": [ { "first": "V", "middle": [], "last": "Demberg", "suffix": "" }, { "first": "F", "middle": [], "last": "Keller", "suffix": "" } ], "year": 2008, "venue": "Cognition", "volume": "109", "issue": "2", "pages": "193--210", "other_ids": {}, "num": null, "urls": [], "raw_text": "V. Demberg and F. Keller. 2008. Data from eye- tracking corpora as evidence for theories of syntactic processing complexity. Cognition, 109(2):193-210.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "An efficient context-free parsing algorithm", "authors": [ { "first": "J", "middle": [], "last": "Earley", "suffix": "" } ], "year": 1970, "venue": "Communications of the ACM", "volume": "6", "issue": "8", "pages": "451--455", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Earley. 1970. An efficient context-free parsing algo- rithm. Communications of the ACM, 6(8):451-455.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Syntactic complexity", "authors": [ { "first": "L", "middle": [], "last": "Frazier", "suffix": "" } ], "year": 1985, "venue": "Natural Language Parsing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Frazier. 1985. Syntactic complexity. In D.R. Dowty, L. Karttunen, and A.M. Zwicky, editors, Natural Language Parsing. Cambridge University Press, Cambridge, UK.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A corpus-based approach for the prediction of language impairment in monolingual English and Spanish-English bilingual children", "authors": [ { "first": "K", "middle": [], "last": "Gabani", "suffix": "" }, { "first": "M", "middle": [], "last": "Sherman", "suffix": "" }, { "first": "T", "middle": [], "last": "Solorio", "suffix": "" }, { "first": "Y", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2009, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Gabani, M. Sherman, T. Solorio, and Y. Liu. 2009. A corpus-based approach for the prediction of language impairment in monolingual English and Spanish-English bilingual children. 
In Proceedings of NAACL-HLT.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The contributions of verb bias and plausibility to the comprehension of temporarily ambiguous sentences", "authors": [ { "first": "S", "middle": [ "M" ], "last": "Garnsey", "suffix": "" }, { "first": "N", "middle": [ "J" ], "last": "Pearlmutter", "suffix": "" }, { "first": "E", "middle": [], "last": "Myers", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Lotocky", "suffix": "" } ], "year": 1997, "venue": "Journal of Memory and Language", "volume": "37", "issue": "1", "pages": "58--93", "other_ids": {}, "num": null, "urls": [], "raw_text": "S.M. Garnsey, N.J. Pearlmutter, E. Myers, and M.A. Lotocky. 1997. The contributions of verb bias and plausibility to the comprehension of temporarily am- biguous sentences. Journal of Memory and Lan- guage, 37(1):58-93.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Linguistic complexity: locality of syntactic dependencies", "authors": [ { "first": "E", "middle": [], "last": "Gibson", "suffix": "" } ], "year": 1998, "venue": "Cognition", "volume": "68", "issue": "1", "pages": "1--76", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Gibson. 1998. Linguistic complexity: locality of syntactic dependencies. Cognition, 68(1):1-76.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "The interaction of top-down and bottom-up statistics in the resolution of syntactic category ambiguity", "authors": [ { "first": "E", "middle": [], "last": "Gibson", "suffix": "" } ], "year": 2006, "venue": "Journal of Memory and Language", "volume": "54", "issue": "3", "pages": "363--388", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Gibson. 2006. The interaction of top-down and bottom-up statistics in the resolution of syntactic category ambiguity. Journal of Memory and Lan- guage, 54(3):363-388.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A probabilistic Earley parser as a psycholinguistic model", "authors": [ { "first": "J", "middle": [ "T" ], "last": "Hale", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 2nd meeting of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.T. Hale. 2001. A probabilistic Earley parser as a psycholinguistic model. In Proceedings of the 2nd meeting of NAACL.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The information conveyed by words in sentences", "authors": [ { "first": "J", "middle": [ "T" ], "last": "Hale", "suffix": "" } ], "year": 2003, "venue": "Journal of Psycholinguistic Research", "volume": "32", "issue": "2", "pages": "101--123", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.T. Hale. 2003. The information conveyed by words in sentences. Journal of Psycholinguistic Research, 32(2):101-123.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Uncertainty about the rest of the sentence", "authors": [ { "first": "J", "middle": [ "T" ], "last": "Hale", "suffix": "" } ], "year": 2006, "venue": "Cognitive Science", "volume": "30", "issue": "4", "pages": "643--672", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.T. Hale. 2006. Uncertainty about the rest of the sen- tence. 
Cognitive Science, 30(4):643-672.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Inducing history representations for broad coverage statistical parsing", "authors": [ { "first": "J", "middle": [], "last": "Henderson", "suffix": "" } ], "year": 2003, "venue": "Proceedings of HLT-NAACL", "volume": "", "issue": "", "pages": "24--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Henderson. 2003. Inducing history representations for broad coverage statistical parsing. In Proceed- ings of HLT-NAACL, pages 24-31.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Stimulus information as a determinant of reaction time", "authors": [ { "first": "R", "middle": [], "last": "Hyman", "suffix": "" } ], "year": 1953, "venue": "Journal of Experimental Psychology: General", "volume": "45", "issue": "3", "pages": "188--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Hyman. 1953. Stimulus information as a determi- nant of reaction time. Journal of Experimental Psy- chology: General, 45(3):188-96.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Computation of the probability of initial substring generation by stochastic context-free grammars", "authors": [ { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "J", "middle": [], "last": "Lafferty", "suffix": "" } ], "year": 1991, "venue": "Computational Linguistics", "volume": "17", "issue": "3", "pages": "315--323", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Jelinek and J. Lafferty. 1991. Computation of the probability of initial substring generation by stochastic context-free grammars. Computational Linguistics, 17(3):315-323.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Compact non-leftrecursive grammars using the selective left-corner transform and factoring", "authors": [ { "first": "M", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "B", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2000, "venue": "Proceedings of COL-ING", "volume": "", "issue": "", "pages": "355--361", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Johnson and B. Roark. 2000. Compact non-left- recursive grammars using the selective left-corner transform and factoring. In Proceedings of COL- ING, pages 355-361.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "The relation between grammatical development and mean length of utterance in morphemes", "authors": [ { "first": "T", "middle": [], "last": "Klee", "suffix": "" }, { "first": "M", "middle": [ "D" ], "last": "Fitzgerald", "suffix": "" } ], "year": 1985, "venue": "Journal of Child Language", "volume": "12", "issue": "", "pages": "251--269", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Klee and M.D. Fitzgerald. 1985. The relation be- tween grammatical development and mean length of utterance in morphemes. Journal of Child Lan- guage, 12:251-269.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Expectation-based syntactic comprehension", "authors": [ { "first": "R", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2008, "venue": "Cognition", "volume": "106", "issue": "3", "pages": "1126--1177", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Levy. 2008. Expectation-based syntactic compre- hension. 
Cognition, 106(3):1126-1177.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Producing high-dimensional semantic spaces from lexical cooccurrence", "authors": [ { "first": "K", "middle": [], "last": "Lund", "suffix": "" }, { "first": "C", "middle": [], "last": "Burgess", "suffix": "" } ], "year": 1996, "venue": "Behavior Research Methods, Instruments, & Computers", "volume": "28", "issue": "", "pages": "203--208", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Lund and C. Burgess. 1996. Producing high-dimensional semantic spaces from lexical co- occurrence. Behavior Research Methods, Instru- ments, & Computers, 28:203-208.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Building a large annotated corpus of English: The Penn Treebank", "authors": [ { "first": "M", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "B", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "M.P. Marcus, B. Santorini, and M.A. Marcinkiewicz. 1993. Building a large annotated corpus of En- glish: The Penn Treebank. Computational Linguis- tics, 19(2):313-330.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Maltparser: A language-independent system for datadriven dependency parsing", "authors": [ { "first": "J", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "J", "middle": [], "last": "Hall", "suffix": "" }, { "first": "J", "middle": [], "last": "Nilsson", "suffix": "" }, { "first": "A", "middle": [], "last": "Chanev", "suffix": "" }, { "first": "G", "middle": [], "last": "Eryigit", "suffix": "" }, { "first": "S", "middle": [], "last": "K\u00fcbler", "suffix": "" }, { "first": "S", "middle": [], "last": "Marinov", "suffix": "" }, { "first": "E", "middle": [], "last": "Marsi", "suffix": "" } ], "year": 2007, "venue": "Natural Language Engineering", "volume": "13", "issue": "2", "pages": "95--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Nivre, J. Hall, J. Nilsson, A. Chanev, G. Eryigit, S. K\u00fcbler, S. Marinov, and E. Marsi. 2007. Malt- parser: A language-independent system for data- driven dependency parsing. Natural Language En- gineering, 13(2):95-135.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Syntactic complexity measures for detecting mild cognitive impairment", "authors": [ { "first": "B", "middle": [], "last": "Roark", "suffix": "" }, { "first": "M", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "K", "middle": [], "last": "Hollingshead", "suffix": "" } ], "year": 2007, "venue": "Proceedings of BioNLP Workshop at ACL", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Roark, M. Mitchell, and K. Hollingshead. 2007. Syntactic complexity measures for detecting mild cognitive impairment. In Proceedings of BioNLP Workshop at ACL, pages 1-8.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Probabilistic top-down parsing and language modeling", "authors": [ { "first": "B", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2001, "venue": "Computational Linguistics", "volume": "27", "issue": "2", "pages": "249--276", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Roark. 2001. Probabilistic top-down parsing and language modeling. 
Computational Linguistics, 27(2):249-276.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Robust garden path parsing", "authors": [ { "first": "B", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2004, "venue": "Natural Language Engineering", "volume": "10", "issue": "1", "pages": "1--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Roark. 2004. Robust garden path parsing. Natural Language Engineering, 10(1):1-24.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Automatic measurement of syntactic development in child language", "authors": [ { "first": "K", "middle": [], "last": "Sagae", "suffix": "" }, { "first": "A", "middle": [], "last": "Lavie", "suffix": "" }, { "first": "B", "middle": [], "last": "Macwhinney", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "197--204", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Sagae, A. Lavie, and B. MacWhinney. 2005. Au- tomatic measurement of syntactic development in child language. In Proceedings of ACL, pages 197- 204.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Using language models to identify language impairment in Spanish-English bilingual children", "authors": [ { "first": "T", "middle": [], "last": "Solorio", "suffix": "" }, { "first": "Y", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2008, "venue": "Proceedings of BioNLP Workshop at ACL", "volume": "", "issue": "", "pages": "116--117", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Solorio and Y. Liu. 2008. Using language models to identify language impairment in Spanish-English bilingual children. In Proceedings of BioNLP Work- shop at ACL, pages 116-117.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "An efficient probabilistic context-free parsing algorithm that computes prefix probabilities", "authors": [ { "first": "A", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 1995, "venue": "Computational Linguistics", "volume": "21", "issue": "2", "pages": "165--202", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Stolcke. 1995. An efficient probabilistic context- free parsing algorithm that computes prefix proba- bilities. Computational Linguistics, 21(2):165-202.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Cloze procedure: A new tool for measuring readability", "authors": [ { "first": "W", "middle": [ "L" ], "last": "Taylor", "suffix": "" } ], "year": 1953, "venue": "Journalism Quarterly", "volume": "30", "issue": "", "pages": "415--433", "other_ids": {}, "num": null, "urls": [], "raw_text": "W.L. Taylor. 1953. Cloze procedure: A new tool for measuring readability. Journalism Quarterly, 30:415-433.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Anticipating upcoming words in discourse: Evidence from ERPs and reading times", "authors": [ { "first": "J", "middle": [ "J A" ], "last": "Van Berkum", "suffix": "" }, { "first": "C", "middle": [ "M" ], "last": "Brown", "suffix": "" }, { "first": "P", "middle": [], "last": "Zwitserlood", "suffix": "" }, { "first": "V", "middle": [], "last": "Kooijman", "suffix": "" }, { "first": "P", "middle": [], "last": "Hagoort", "suffix": "" } ], "year": 2005, "venue": "Learning and Memory", "volume": "31", "issue": "3", "pages": "443--467", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.J.A. Van Berkum, C.M. Brown, P. Zwitserlood, V.Kooijman, and P. Hagoort. 2005. 
Anticipat- ing upcoming words in discourse: Evidence from ERPs and reading times. Learning and Memory, 31(3):443-467.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "A model and an hypothesis for language structure", "authors": [ { "first": "V", "middle": [ "H" ], "last": "Yngve", "suffix": "" } ], "year": 1960, "venue": "Proceedings of the American Philosophical Society", "volume": "104", "issue": "", "pages": "444--466", "other_ids": {}, "num": null, "urls": [], "raw_text": "V.H. Yngve. 1960. A model and an hypothesis for lan- guage structure. Proceedings of the American Philo- sophical Society, 104:444-466.", "links": null } }, "ref_entries": { "TABREF0": { "html": null, "text": "W_1 . . . W_n and |W| = n. Let W[i, j] denote the substring beginning at word W_i and ending at word W_j of the string. Then W_{|W|} is the last word in the string, and W[1, |W|] is the string as a whole. Adjacent strings represent concatenation, i.e., W[1, i]W[i+1, j] = W[1, j]. Thus W[1, i]w represents the string where W_{i+1} = w.", "type_str": "table", "num": null, "content": "" }, "TABREF2": { "html": null, "text": "Correlations between (mean-centered) predictors. Note that unigram frequencies were represented as logs, other scores as negative logs, hence the sign of the correlations.", "type_str": "table", "num": null, "content": "
" }, "TABREF4": { "html": null, "text": "Estimated effects from mixed effects models on open and closed items (stars denote significance at p<.05)", "type_str": "table", "num": null, "content": "
" } } } }