{
"paper_id": "I11-1049",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:32:17.441378Z"
},
"title": "Grammar Induction from Text Using Small Syntactic Prototypes",
"authors": [
{
"first": "Prachya",
"middle": [],
"last": "Boonkwan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {
"addrLine": "10 Crichton Street Edinburgh",
"postCode": "EH8 9AB",
"country": "UK"
}
},
"email": "p.boonkwan@sms.ed.ac.uk"
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {
"addrLine": "10 Crichton Street Edinburgh",
"postCode": "EH8 9AB",
"country": "UK"
}
},
"email": "steedman@inf.ed.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present an efficient technique to incorporate a small number of cross-linguistic parameter settings defining default word orders to otherwise unsupervised grammar induction. A syntactic prototype, represented by the integrated model between Categorial Grammar and dependency structure, generated from the language parameters, is used to prune the search space. We also propose heuristics which prefer less complex syntactic categories to more complex ones in parse decoding. The system reduces errors generated by the state-of-the-art baselines for WSJ10 (1% error reduction of F1 score for the model trained on Sections 2-22 and tested on Section 23), Chinese10 (26% error reduction of F1), German10 (9% error reduction of F1), and Japanese10 (8% error reduction of F1), and is not significantly different from the baseline for Czech10.",
"pdf_parse": {
"paper_id": "I11-1049",
"_pdf_hash": "",
"abstract": [
{
"text": "We present an efficient technique to incorporate a small number of cross-linguistic parameter settings defining default word orders to otherwise unsupervised grammar induction. A syntactic prototype, represented by the integrated model between Categorial Grammar and dependency structure, generated from the language parameters, is used to prune the search space. We also propose heuristics which prefer less complex syntactic categories to more complex ones in parse decoding. The system reduces errors generated by the state-of-the-art baselines for WSJ10 (1% error reduction of F1 score for the model trained on Sections 2-22 and tested on Section 23), Chinese10 (26% error reduction of F1), German10 (9% error reduction of F1), and Japanese10 (8% error reduction of F1), and is not significantly different from the baseline for Czech10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Unsupervised grammar induction has gained general interest for several decades, offering the possibility of building practical syntactic parsers by reducing the labor of constructing a treebank from scratch. One approach is to exploit the Inside/Outside Algorithm (Baker, 1979; Carroll and Charniak, 1992), a variation of the EM algorithm for PCFGs, to estimate the parameters of the parser's language models. More recent advances in this approach are the constituent-context model (CCM) (Klein and Manning, 2001; Klein and Manning, 2002), the dependency model with valence (DMV) based on Collins's head dependency model (1999), and the CCM+DMV mixture (Klein and Manning, 2004; Klein, 2005). Several search techniques and models have been added to CCM+DMV to deal with local optima and data sparsity (Smith, 2006; Cohen et al., 2008; Headden III et al., 2009). Spitkovsky et al. (2010) proposed a training strategy in which a model fully trained on shorter sentences and roughly trained on longer sentences tends to outperform a model fully trained on the entire dataset. Recently, Gillenwater et al. (2010) proposed the use of posterior regularization in EM, in which the posterior distribution of parent-child POS tags is regularized toward an expected distribution.",
"cite_spans": [
{
"start": 264,
"end": 277,
"text": "(Baker, 1979;",
"ref_id": "BIBREF3"
},
{
"start": 278,
"end": 305,
"text": "Carroll and Charniak, 1992)",
"ref_id": "BIBREF8"
},
{
"start": 484,
"end": 508,
"text": "(Klein and Manning, 2001",
"ref_id": "BIBREF22"
},
{
"start": 511,
"end": 535,
"text": "Klein and Manning, 2002)",
"ref_id": "BIBREF23"
},
{
"start": 614,
"end": 620,
"text": "(1999)",
"ref_id": null
},
{
"start": 647,
"end": 672,
"text": "(Klein and Manning, 2004;",
"ref_id": null
},
{
"start": 673,
"end": 685,
"text": "Klein, 2005)",
"ref_id": "BIBREF26"
},
{
"start": 800,
"end": 813,
"text": "(Smith, 2006;",
"ref_id": "BIBREF34"
},
{
"start": 814,
"end": 833,
"text": "Cohen et al., 2008;",
"ref_id": "BIBREF9"
},
{
"start": 834,
"end": 859,
"text": "Headden III et al., 2009)",
"ref_id": "BIBREF16"
},
{
"start": 862,
"end": 886,
"text": "Spitkovsky et al. (2010)",
"ref_id": "BIBREF36"
},
{
"start": 1083,
"end": 1108,
"text": "Gillenwater et al. (2010)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, purely unsupervised learning still does not perform well, because the parameter estimation can be misled by unexpectedly frequent co-occurrences. A common example is the collocation of a verb (VBZ) and a determiner (DT) in a verb phrase, which results in incorrect trees such as ((VBZ DT) NN).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To avoid this problem, the use of syntactic prototypes has been proposed. Instead of enumerating every possibility, syntactic structures are constructed subject to a set of syntactic constraints. Haghighi and Klein (2006) proposed the use of bracketing rules extracted from WSJ10 in CCM and considerably improved accuracy. Druck et al. (2009) used dependency formation rules handcrafted by linguists to improve the accuracy of DMV. Snyder et al. (2009) perform semi-supervised grammar induction from bilingual text with the help of a supervised parser on one side and word alignments. However, bilingual corpora are not available for many language pairs. Naseem et al. (2010) proposed the use of cross-linguistic knowledge represented as a set of allowable head-dependent pairs. However, this method still requires language-specific rules to boost accuracy. If language-specific rules are necessary to achieve accuracy, we need more efficient ways to encode this knowledge.",
"cite_spans": [
{
"start": 202,
"end": 227,
"text": "Haghighi and Klein (2006)",
"ref_id": "BIBREF15"
},
{
"start": 329,
"end": 348,
"text": "Druck et al. (2009)",
"ref_id": "BIBREF11"
},
{
"start": 438,
"end": 458,
"text": "Snyder et al. (2009)",
"ref_id": "BIBREF35"
},
{
"start": 654,
"end": 674,
"text": "Naseem et al. (2010)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper proposes a method for inducing language-specific word order regularities capturing cross-linguistically frequent constructions to constrain unsupervised grammar induction. We use the notion of a syntactic prototype, a set of grammar rules automatically generated for such constructions. Categorial Dependency Grammar (CDG), which combines rules of constituency and dependency, is used to represent syntactic prototypes. We also propose a novel category penalty score for use in decoding, which defines the most probable parse according to a preference for less complex categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is organized as follows. \u00a72 details the method of encoding linguistic prior knowledge as a syntactic prototype. \u00a73 explains an overview of our approach. \u00a74 shows experiment results and discusses the errors. We conclude in \u00a75.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A syntactic prototype is a set of grammar rules representing default language parameters (such as word order) for the most cross-linguistically frequent linguistic constructions, following Naseem et al. (2010)'s notion of cross-linguistic knowledge. This section shows how CDG rules are derived from a set of word order constraints.",
"cite_spans": [
{
"start": 188,
"end": 208,
"text": "Naseem et al. (2010)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Prototypes",
"sec_num": "2"
},
{
"text": "Categorial Dependency Grammar (CDG) is an extension of pure Categorial Grammar (CG) (Ajdukiewicz, 1935; Bar-Hillel, 1953) used for defining the language-specific prototypes to be discovered. Its syntactic derivations define constituency and dependency in parallel. In CG, each constituent is assigned one or more syntactic categories, defined as either an atomic category or a function category. For example, the proper name 'John' is assigned the atomic category np. If X and Y are categories of either kind, then X/Y and X\\Y are function categories that map constituents of type Y, respectively on the right and on the left, into those of type X. For example, the intransitive verb 'walks' is assigned the function category s\\np.",
"cite_spans": [
{
"start": 84,
"end": 103,
"text": "(Ajdukiewicz, 1935;",
"ref_id": "BIBREF0"
},
{
"start": 104,
"end": 121,
"text": "Bar-Hillel, 1953)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Categorial Dependency Grammar",
"sec_num": "2.1"
},
{
"text": "We extend CG to construct a dependency structure alongside the syntactic derivation by encoding the direction of dependency in slashes. Using the head-outward notation for dependency (Collins, 1999), the slash is subscripted < (>) if the corresponding dependency is to be linked from the head on the right to its dependent on the left (from the head on the left to its dependent on the right). For example, an English adjective (e.g. 'big') can be assigned the category np/>np, while a transitive verb can be assigned s\\>np/<np. CDG differs from PF-CCG (Koller and Kuhlmann, 2009) in that dependency direction is specified independently of the order of function and argument, while in PF-CCG it is determined by slash directionality. In PF-CCG such an adjective has the implicit category np/>np and acts as the head of the noun phrase. The derivation rules for context-free CDG are listed below:",
"cite_spans": [
{
"start": 183,
"end": 198,
"text": "(Collins, 1999)",
"ref_id": "BIBREF10"
},
{
"start": 555,
"end": 582,
"text": "(Koller and Kuhlmann, 2009)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Categorial Dependency Grammar",
"sec_num": "2.1"
},
{
"text": "X/<Y : d1   Y : d2 \u21d2 X : h(d1) \u2192 h(d2);   X/>Y : d1   Y : d2 \u21d2 X : h(d1) \u2190 h(d2);   Y : d1   X\\<Y : d2 \u21d2 X : h(d1) \u2192 h(d2);   Y : d1   X\\>Y : d2 \u21d2 X : h(d1) \u2190 h(d2)   (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorial Dependency Grammar",
"sec_num": "2.1"
},
{
"text": "where the notations",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorial Dependency Grammar",
"sec_num": "2.1"
},
{
"text": "h(d1) \u2192 h(d2) and h(d1) \u2190 h(d2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorial Dependency Grammar",
"sec_num": "2.1"
},
{
"text": "mean a dependency linking from the head of the dependency structure d1 to the head of d2, and a dependency linking from the head of d2 to the head of d1, respectively. Let us denote a constituent type by C : w, where C is a syntactic category and w is the head word of the constituent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorial Dependency Grammar",
"sec_num": "2.1"
},
{
"text": "Given the CDG in (2), we obtain the syntactic derivation of the string 'John eats delicious sandwiches' in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 107,
"end": 115,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Categorial Dependency Grammar",
"sec_num": "2.1"
},
{
"text": "(2) John, sandwiches := np; delicious := np/>np; eats := s\\>np/<np",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorial Dependency Grammar",
"sec_num": "2.1"
},
{
"text": "Figure 1(a) shows the dependency-driven derivation, in which the heads of constituents are propagated. Figure 1(b) reflects the formation of the dependency structure corresponding to the dependency-driven derivation. The attraction of using CDG for grammar induction is the integration of the constituent model and the dependency model. As shown in Figure 1, the syntactic derivation defines the dependency structure, because we can directly construct a dependency structure from any head-driven syntactic derivation using the annotated directions. CDG can boost the accuracy of grammar induction by modeling rules of both constituent formation and dependency. However, the search space would become impossibly large if we had to enumerate all possible syntactic categories, including all possible arguments and dependency directions. This danger can be avoided by using small amounts of hand-crafted prior linguistic knowledge. Figure 1: Syntactic derivation of 'John ate delicious sandwiches' based on CDG. Each constituent type is denoted by C : w, where C is a syntactic category and w is the head word, such as s\\>np/<np : eats.",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 107,
"text": "Figure 1",
"ref_id": null
},
{
"start": 925,
"end": 933,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Categorial Dependency Grammar",
"sec_num": "2.1"
},
{
"text": "However, a simple parametric syntactic prototype will give rise to parsing failures when faced with parametrically exceptional items, which occur in most if not all languages. We allow for such exceptions to be accommodated by defining an additional wildcard category which combines with any syntactic category to yield the wildcard itself, according to the following additional combinatory rules:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorial Dependency Grammar",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ": d1 X : d2 \u21d2 { : h(d1) \u2190 h(d2),",
"eq_num": "(3)"
}
],
"section": "Categorial Dependency Grammar",
"sec_num": "2.1"
},
{
"text": ":",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorial Dependency Grammar",
"sec_num": "2.1"
},
{
"text": "h(d1) \u2192 h(d2)} X : d1 : d2 \u21d2 { : h(d1) \u2190 h(d2), : h(d1) \u2192 h(d2)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorial Dependency Grammar",
"sec_num": "2.1"
},
{
"text": "The wildcard is assigned to unknown words and large irreducible constituents so as to allow complete parses of otherwise unparsable sentences. As shown in (3), each wildcard derivation generates two possible dependency structures; i.e. either d1 or d2 can be the head of a phrase. The wildcard will be revisited in \u00a73.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categorial Dependency Grammar",
"sec_num": "2.1"
},
{
"text": "We generate the CDG for each language automatically from language parameters. To facilitate this process, we have devised a questionnaire consisting of 30 questions concerning word orders for constructions that occur in most languages. Sorted by their importance, the questions can be grouped into the following categories:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Parameterization",
"sec_num": "2.2"
},
{
"text": "1. The orders of subject, verb, direct object, and optional indirect object (1 question)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Parameterization",
"sec_num": "2.2"
},
{
"text": "2. The argument orders of subject-and objectcontrol verbs (2 questions)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Parameterization",
"sec_num": "2.2"
},
{
"text": "3. The orders of adjectives, adverbs, and auxiliary verbs (4 questions) 4. The use of cardinal numbers and noun classifiers (2 questions)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Parameterization",
"sec_num": "2.2"
},
{
"text": "5. The argument orders of adpositions, nominal modifiers, adverbials, possessive markers, relative pronouns, and subordinate conjunctions. 7 questions6. The orders of gerunds, infinitive markers, nominalizers, and sentential modifiers (6 questions)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Parameterization",
"sec_num": "2.2"
},
{
"text": "7. The orders of particles, the existence of a copula, the usages of gerunds, the order of negative markers, the use of dative shifts, and the omission of discourse-given subject and object (8 questions)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Parameterization",
"sec_num": "2.2"
},
{
"text": "These language parameters are used to automatically generate a CDG representing a syntactic prototype that includes language-specific types of cross-linguistically frequent categories. For example, if a language has the word order SVIO, the syntactic categories s\\>np, s\\>np/<np, and s\\>np/<np/<np are generated and assigned by default to intransitive, transitive, and ditransitive verbs, respectively. Each slash in a syntactic category is assigned a dependency direction according to Collins (1999)'s head percolation heuristics. All questions are optional; i.e. if any of the questions are left blank, all possible categories for that question will be generated.",
"cite_spans": [
{
"start": 520,
"end": 534,
"text": "Collins (1999)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Parameterization",
"sec_num": "2.2"
},
{
"text": "Once the cross-linguistic category classes are generated, we then map them to the POS tags in a particular corpus. This is an engineering task in which the mapping should be fitted as well as possible to the corpus. However, we will show in the experiment section that the preparation process for syntactic prototypes is quantifiable and reasonable in comparison to the improvement in accuracy attained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Parameterization",
"sec_num": "2.2"
},
{
"text": "3 Grammar Induction",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Parameterization",
"sec_num": "2.2"
},
{
"text": "The first step in grammar induction is to enumerate all possible parses for each sentence. We use a table mapping POS tags to language-specific categories to define the lexicon, and build a parse chart for each sentence with the CKY algorithm. A packed chart is used for both speed and space compactness. We apply a right-branching preference to eliminate spurious ambiguity caused by coordination and nominal compounding. In the event of a sentence yielding no parse using that lexicon, we assign the wildcard category ' ' to all unknown words and maximal irreducible constituents, and reparse the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure Enumeration",
"sec_num": "3.1"
},
{
"text": "We extend the probabilistic context-free grammar with role-emission probabilities, defined as the product of the probability of each daughter category performing as a head or a dependent in a derivation. This model was motivated by Collins (1999)'s head-outward dependency model and Hockenmaier (2003)'s generative model for parsing CCG. Given a CDG G, we define the probability of a tree t having the constituent type C : w by:",
"cite_spans": [
{
"start": 232,
"end": 246,
"text": "Collins (1999)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Model",
"sec_num": "3.2"
},
{
"text": "P(t \\mid s, G) = \\frac{1}{Z} \\prod_{(C:w \\to \\alpha) \\in R(t) - L(t)} \\pi_{\\mathrm{exp}}(\\alpha \\mid C:w, G) \\times \\pi_{\\mathrm{head}}(H:w' \\mid G) \\times \\pi_{\\mathrm{dep}}(D:w'' \\mid G) \\times \\prod_{C:w \\in N(t)} \\pi_{\\mathrm{HE}}(w \\mid C, G)^{\\#_t(C:w)} \\quad (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Model",
"sec_num": "3.2"
},
{
"text": "where Z is a normalization constant and each production \u03b1 contains H : w and D : w , and H : w and D : w are the head and the dependent, respectively. There are four types of parameters as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Model",
"sec_num": "3.2"
},
{
"text": "1. \u03c0_exp(\u03b1 | C : w, G): probability of the type C : w generating a production \u03b1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Model",
"sec_num": "3.2"
},
{
"text": "2. \u03c0_head(C : w | G): probability of the type C : w performing as a head.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Model",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\pi_{\\mathrm{head}}(C:w \\mid G) = \\frac{\\sum_{C':w'} \\#(C':w' \\to \\alpha^{C:w}_{\\mathrm{head}})}{\\sum_{C':w'} \\#(C':w' \\to \\alpha^{C:w})}",
"eq_num": "(5)"
}
],
"section": "Parsing Model",
"sec_num": "3.2"
},
{
"text": "3. \u03c0_dep(C : w | G): probability of the type C : w performing as a dependent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Model",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\pi_{\\mathrm{dep}}(C:w \\mid G) = \\frac{\\sum_{C':w'} \\#(C':w' \\to \\alpha^{C:w}_{\\mathrm{dep}})}{\\sum_{C':w'} \\#(C':w' \\to \\alpha^{C:w})}",
"eq_num": "(6)"
}
],
"section": "Parsing Model",
"sec_num": "3.2"
},
{
"text": "4. \u03c0_HE(w | C, G): probability of a category C generating the head word w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Model",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\pi_{\\mathrm{HE}}(w \\mid C, G) = \\frac{\\sum_{t \\in Q} \\#_t(C:w)}{\\sum_{t \\in Q} \\sum_{w'} \\#_t(C:w')}",
"eq_num": "(7)"
}
],
"section": "Parsing Model",
"sec_num": "3.2"
},
{
"text": "where \u03b1^{C:w} is a production that contains C : w, and \u03b1^{C:w}_{head} and \u03b1^{C:w}_{dep} have C : w as the head and the dependent, respectively. #_t(C : w) is the frequency count of the category C : w in the tree t. N(t) is the set of all nonterminal nodes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Model",
"sec_num": "3.2"
},
{
"text": "Learning is achieved using the Variational Bayesian EM Algorithm (VB-EM) (Attias, 2000; Ghahramani and Beal, 2000) to estimate the parameters \u03c0_exp, \u03c0_head, \u03c0_dep, and \u03c0_HE. We followed the approach of Kurihara and Sato (2006) for training PCFGs with VB-EM. This approach places Dirichlet priors over the multinomial grammar rule distributions. We set the Dirichlet hyperparameters to 1.0 for all rules containing the wildcard category and 5.0 for all others. In all other regards, we followed Kurihara and Sato (2006). The VB-EM algorithm iterates two processes: expectation calculation and parameter maximization. It is favored for the present purpose because it is less prone to overfitting than the standard Inside/Outside Algorithm, owing to its free-energy criterion for model selection. We calculate expected counts using dynamic programming (Baker, 1979; Lari and Young, 1990).",
"cite_spans": [
{
"start": 73,
"end": 87,
"text": "(Attias, 2000;",
"ref_id": "BIBREF1"
},
{
"start": 88,
"end": 114,
"text": "Ghahramani and Beal, 2000)",
"ref_id": "BIBREF13"
},
{
"start": 206,
"end": 230,
"text": "Kurihara and Sato (2006)",
"ref_id": "BIBREF28"
},
{
"start": 502,
"end": 526,
"text": "Kurihara and Sato (2006)",
"ref_id": "BIBREF28"
},
{
"start": 855,
"end": 868,
"text": "(Baker, 1979;",
"ref_id": "BIBREF3"
},
{
"start": 869,
"end": 890,
"text": "Lari and Young, 1990)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3.3"
},
{
"text": "To further avoid over-fitting, we smoothed the probability of each substructure with the additive smoothing technique (Lidstone, 1920; Johnson, 1932; Jeffreys, 1948). An approximated parameter \u03c0\u0302(\u03c4) is calculated by",
"cite_spans": [
{
"start": 133,
"end": 149,
"text": "(Lidstone, 1920;",
"ref_id": "BIBREF30"
},
{
"start": 150,
"end": 164,
"text": "Johnson, 1932;",
"ref_id": "BIBREF19"
},
{
"start": 165,
"end": 180,
"text": "Jeffreys, 1948)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3.3"
},
{
"text": "\\hat{\\pi}(\\tau) = \\frac{\\pi(\\tau) + \\epsilon}{1 + \\epsilon} \\quad (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3.3"
},
{
"text": "where \u03b5 is a small constant value. In our experiments, we chose \u03b5 = 10^{-25}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3.3"
},
{
"text": "By using prototypical syntactic categories in derivation, the system can be misled by complex categories. A parse containing more complex categories is preferred to one containing less complex categories, because both simple and complex syntactic categories have the same chance of occurrence in parse enumeration. When learning the parameters, rules containing complex categories tend to have relatively excessive probability, as opposed to the Zipfian distribution of syntactic categories in which less complex categories are more frequently found in CCGbank. We therefore introduce a category penalty score. The category penalty score is motivated by the observation that, in practical use of language, simpler categories tend to be used more frequently than the more complex ones. The penalty score v(c) of the category c is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding with Category Penalty",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v(c) = k^{S(c)}",
"eq_num": "(9)"
}
],
"section": "Decoding with Category Penalty",
"sec_num": "3.4"
},
{
"text": "where S(c) is the count of all forward and backward slashes in c and k is the penalty constant. We weight each tree by the product of the penalty scores of the syntactic category on each node; i.e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding with Category Penalty",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(t \\mid s, G) = \\begin{cases} v(A) \\cdot \\pi(A \\to w) & \\text{lexicon} \\\\ v(A) \\cdot \\pi(A \\to \\alpha) \\cdot \\prod_{i=1}^{|\\alpha|} P(t_i \\mid s, G) & \\text{branching} \\end{cases}",
"eq_num": "(10)"
}
],
"section": "Decoding with Category Penalty",
"sec_num": "3.4"
},
{
"text": "We use Viterbi decoding to find the most probable parse from a packed chart.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding with Category Penalty",
"sec_num": "3.4"
},
{
"text": "In order to evaluate the method in comparison to the state of the art, we chose WSJ10, the standard collection of trees from the WSJ part of the PTB (Marcus et al., 1993) whose sentence length does not exceed ten words after removing punctuation marks and empty elements. Instead of surface forms, we used the set of POS sequences taken from all WSJ10 trees to avoid the data sparsity issue. We converted the Penn Treebank into dependency structures with Collins (1999)'s head percolation heuristics. Following the literature, we trained the system on Sections 2-22 and evaluated the resultant model on Section 23.",
"cite_spans": [
{
"start": 141,
"end": 162,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF31"
},
{
"start": 450,
"end": 464,
"text": "Collins (1999)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Accuracy Metrics",
"sec_num": "4.1"
},
{
"text": "For multilingual experiments, we made use of dependency corpora from the CoNLL-X Shared Task (Buchholz and Marsi, 2006). As shown in Table 1, Chinese (Keh-Liann and Hsieh, 2004), Czech (Bohomov\u00e0 et al., 2001), German (Brants et al., 2002), and Japanese (Kawata and Bartels, 2000) were chosen for the sake of language typology variety. We also chose sentences whose length does not exceed ten words after removing punctuation marks. Because Czech is a free-word-order, inflectional language, its dataset was specially prepared by augmenting the POS tags with inflectional information, resulting in significantly more granularity. However, Czech's syntactic prototype does not make use of this information to restrain the search space. We measured the capability of our system by two metrics: directed dependency accuracy and undirected dependency accuracy (Klein, 2005). For directed dependency accuracy, we count a directed dependency of a word pair as correct if it exists in the gold standard. For undirected dependency accuracy, we neglect the direction of the dependency. All accuracy numbers are reported in terms of precision, recall, and F1 scores.",
"cite_spans": [
{
"start": 98,
"end": 124,
"text": "(Buchholz and Marsi, 2006)",
"ref_id": "BIBREF7"
},
{
"start": 189,
"end": 211,
"text": "(Bohomov\u00e0 et al., 2001",
"ref_id": "BIBREF5"
},
{
"start": 222,
"end": 243,
"text": "(Brants et al., 2002)",
"ref_id": "BIBREF6"
},
{
"start": 259,
"end": 285,
"text": "(Kawata and Bartels, 2000)",
"ref_id": "BIBREF20"
},
{
"start": 876,
"end": 889,
"text": "(Klein, 2005)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 136,
"end": 143,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Datasets and Accuracy Metrics",
"sec_num": "4.1"
},
{
"text": "In order to construct the syntactic prototype for each language, we conducted an interview with a non-linguist native speaker. We asked them each question in the questionnaire described in \u00a72.2 by giving them a sample sentence and letting them build the corresponding sentence in their language. We then asked questions to elicit word alignments and analyzed the answers. It normally takes up to two hours per previously unseen language to complete the questionnaire.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Construction of Syntactic Prototypes",
"sec_num": "4.2"
},
{
"text": "We then studied the manual of the treebank's POS tags and mapped each tag to one or more language-specific category classes. This process normally takes around four to six hours, to thoroughly scrutinize the usage of each POS tag and assign it to the appropriate classes. It therefore takes six to ten hours in total to build a syntactic prototype for each language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Construction of Syntactic Prototypes",
"sec_num": "4.2"
},
{
"text": "This section presents the results of English and multilingual experiments using syntactic prototypes as a guide to grammar induction. Table 2 shows the results of English grammar induction on WSJ10. First, we compared the produced trees against the gold standard produced by Collins's parser. We trained the system on Sections 2-22 of WSJ10 and tested it on Section 23. In decoding, we set the category penalty constant to 10^{-15}. The F1 score outperforms the baseline set by (Naseem et al., 2010). To exhibit the stability of the approach, we also ran ten-fold cross validation on English; i.e. we divided WSJ10 into ten parts and, for each fold, chose nine parts as the training set and the remaining part as the test set. We attained a higher F1 score, as expected for cross-validation, which effectively tests on the development set.",
"cite_spans": [
{
"start": 495,
"end": 516,
"text": "(Naseem et al., 2010)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 130,
"end": 137,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "Category Penalties Figure 2 shows the effects of numbers of constraints towards directed and undirected F1 scores. We varied the number of constraints in English syntactic prototypes. The category class 4 (the usages of cardinal numbers and noun classifiers) was neglected, because we can treat cardinal numbers as adjectives or nouns and there are no true noun classifiers in English. Therefore there are 28 constraints in total for English. We again trained on Sections 2-22 of WSJ10 and tested on Section 23. In decoding, we set the category penalty constant to 10 \u221215 . We then evaluated the accuracy against the gold standard produced by Collins's parser.",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 27,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effects of Numbers of Constraints and",
"sec_num": "4.3.2"
},
{
"text": "As we increased the number of constraints in syntactic prototypes, we found that both directed and undirected F1 scores increase and start to saturate after the first 20 rules. In keeping with the Zipfian distribution, the first 20 rules cover frequent linguistic phenomena. When we pruned the search space with linguistic constraints, the directed accuracy starts to approach the undirected accuracy. We also note that errors generated by the system reflect the same attachment ambiguity errors as supervised parsing. Figure 3 shows the effects of category penalty constants on accuracy. We again trained on Sections 2-22 of WSJ10 and tested on Section 23. We then evaluated the accuracy against the gold stan- Figure 3 : Effects of category penalty on the directed and undirected dependency accuracy.",
"cite_spans": [],
"ref_spans": [
{
"start": 519,
"end": 527,
"text": "Figure 3",
"ref_id": null
},
{
"start": 712,
"end": 720,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effects of Numbers of Constraints and",
"sec_num": "4.3.2"
},
{
"text": "dard produced by Collins's parser. We notice that both accuracy scores saturate at the penalty constant of 10 \u221215 and slightly decay afterwards.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effects of Numbers of Constraints and",
"sec_num": "4.3.2"
},
{
"text": "We also conducted multilingual experiments on Chinese, Czech, German, and Japanese to show the stability of the approach. We ran ten-fold cross validation on each language and calculated the average F1 scores. Our baseline systems are as follows: (Naseem et al., 2010) for English, (Snyder et al., 2009) for Chinese, and (Gillenwater et al., 2010) for Czech, German, and Japanese. In Table 3, our system significantly outperforms almost all the baselines, except in the Czech experiment. We believe that the under-performance of our system on Czech is caused by the data sparsity issue. Designed based on rather fixed word ordered languages, the syntactic prototype needs to generate almost all possible syntactic categories to capture Czech's free word orderedness. Although its POS ",
"cite_spans": [
{
"start": 247,
"end": 268,
"text": "(Naseem et al., 2010)",
"ref_id": "BIBREF32"
},
{
"start": 282,
"end": 303,
"text": "(Snyder et al., 2009)",
"ref_id": "BIBREF35"
},
{
"start": 321,
"end": 347,
"text": "(Gillenwater et al., 2010)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Experiments",
"sec_num": "4.3.3"
},
{
"text": "We counted erroneous dependency pairs generated in the English experiment (10X) in \u00a74.3.1, and we classified errors into two types: over-generation and under-generation. From Table 4 , we can notice that the majority of errors are caused by adverbial and prepositional attachment (e.g. RB > VB, CD < IN, and NN < IN), and NP structural ambiguity (e.g. NN > NNP, DT > NN, and NNP > NNP). 2 These errors are common in supervised parsing. There is also under-generation of adverbial preposition phrases. We believe that the category penalty score accounts for this issue, resulting in the NP-modifying preposition (such as np\\ < np/ > np) being preferred to the adverbial one (such as s\\ > np\\ < (s\\ > np)/ < np).",
"cite_spans": [],
"ref_spans": [
{
"start": 175,
"end": 182,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.4"
},
{
"text": "We have demonstrated an efficient approach to grammar induction using linguistic prior knowledge encoded as a prototype or lexical schema. This prior knowledge was used to capture frequent linguistic phenomena. To integrate the strength of constituent and dependency models, Categorial Dependency Grammar was used as the backbone formalism. We also proposed a category penalty score preferring less complex categories, based on the observation of the Zipfian distribution of category types in CCGbank. Syntactic prototypes can capture the most frequent constructions and improve accuracy on almost all of the selected languages. We found that dependency accuracy correlates with the Zipfian distribution as the number of constraints increases, as the increase in accuracy saturates after the first 20 rules. Error analysis suggests that the main sources of error are in adverbial and prepositional attachment, and NP structural ambiguity, which are also problematic for supervised parsing. Future work remains as follows. First, we are looking forward to improving the capability of our syntactic prototype to also handle free word ordered languages by generating syntactic categories with more flexible combination and restraining the search space with inflectional information. Second, we plan to experiment on grammar induction from untagged words by decomposing the model into tagging and parsing subproblems (Ganchev et al., 2009; Rush et al., 2010; Auli and Lopez, 2011) . Third and finally, we will experiment on longer sentences to show the scalability of our approach in dealing with larger data. Table 3 : Undirected and directed dependency accuracy of grammar induction for English, Chinese, Czech, German, and Japanese. Our baseline systems are as follows: \u2020 (Naseem et al., 2010) for English, \u2021 (Snyder et al., 2009) for Chinese, and (Gillenwater et al., 2010) ",
"cite_spans": [
{
"start": 1413,
"end": 1435,
"text": "(Ganchev et al., 2009;",
"ref_id": "BIBREF12"
},
{
"start": 1436,
"end": 1454,
"text": "Rush et al., 2010;",
"ref_id": "BIBREF33"
},
{
"start": 1455,
"end": 1476,
"text": "Auli and Lopez, 2011)",
"ref_id": "BIBREF2"
},
{
"start": 1771,
"end": 1792,
"text": "(Naseem et al., 2010)",
"ref_id": "BIBREF32"
},
{
"start": 1808,
"end": 1829,
"text": "(Snyder et al., 2009)",
"ref_id": "BIBREF35"
},
{
"start": 1847,
"end": 1873,
"text": "(Gillenwater et al., 2010)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 1606,
"end": 1613,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "E.g. intransitive verb, transitive verb, ditransitive verb, subject-and object-control verb, adjective, adverb, preposition, relative pronoun, gerund, copula, subordinate conjunction, noun classifier, infinitive marker, cardinal number, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Similar to the dependency directions in \u00a72.1, the notations < and > are pointers to the syntactic head of the phrase. For example, DT > NN means that NN is the head and DT is its dependent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Tom Kwiatkowski, Michael Auli, Christos Christodoulopoulos, Alexandra Birch, Mark Granroth-Wilding, and Emily Thomforde (University of Edinburgh), Adam Lopez (Johns Hopkins University), and Michael Collins (Columbia University) for useful comments and discussion related to this work, and the three anonymous reviewers for their useful feedback. This research was funded by the Royal Thai Government Scholarship to Prachya Boonkwan and EU ERC Advanced Fellowship 249520 GRAMPLUS to Mark Steedman.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Die Syntaktische Konnexit\u00e4t. Polish Logic",
"authors": [
{
"first": "Kazimierz",
"middle": [],
"last": "Ajdukiewicz",
"suffix": ""
}
],
"year": 1935,
"venue": "",
"volume": "",
"issue": "",
"pages": "207--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazimierz Ajdukiewicz. 1935. Die Syntaktische Kon- nexit\u00e4t. Polish Logic, pages 207-231.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A variational Bayesian framework for graphical models",
"authors": [
{
"first": "Hagai",
"middle": [],
"last": "Attias",
"suffix": ""
}
],
"year": 2000,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hagai Attias. 2000. A variational Bayesian framework for graphical models. In Advances in Neural Infor- mation Processing Systems (NIPS 2000).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A comparison of loopy belief propagation and dual decomposition for integrated CCG supertagging and parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL-2011",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Auli and Adam Lopez. 2011. A compari- son of loopy belief propagation and dual decompo- sition for integrated CCG supertagging and parsing. In Proceedings of ACL-2011, June.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Trainable grammars for speech recognition",
"authors": [
{
"first": "J",
"middle": [
"K"
],
"last": "Baker",
"suffix": ""
}
],
"year": 1979,
"venue": "Speech Communication Papers for the 97th Meeting of the",
"volume": "",
"issue": "",
"pages": "547--550",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. K. Baker. 1979. Trainable grammars for speech recognition. In D. H. Klatt and J. J. Wolf, editors, Speech Communication Papers for the 97th Meet- ing of the Acoustical Society of America, pages 547- 550.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Quasi-Arithmetical Notation for Syntactic Description. Language",
"authors": [
{
"first": "Yehoshua",
"middle": [],
"last": "Bar-Hillel",
"suffix": ""
}
],
"year": 1953,
"venue": "",
"volume": "29",
"issue": "",
"pages": "47--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yehoshua Bar-Hillel. 1953. A Quasi-Arithmetical No- tation for Syntactic Description. Language, 29:47- 58.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The Prague dependency treebank: Threelevel annotation scenario",
"authors": [
{
"first": "A",
"middle": [],
"last": "Bohomov\u00e0",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hajic",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Hajicova",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Hladka",
"suffix": ""
}
],
"year": 2001,
"venue": "Treebanks: Building and Using Syntactically Annotated Corpora",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Bohomov\u00e0, J. Hajic, E. Hajicova, and B. Hladka. 2001. The Prague dependency treebank: Three- level annotation scenario. In Anne Abeill\u00e9, editor, Treebanks: Building and Using Syntactically Anno- tated Corpora.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The TIGER Treebank",
"authors": [
{
"first": "T",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Dipper",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Hansen",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Lezius",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings Workshop on Treebanks and Linguistic Theories",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Brants, S. Dipper, S. Hansen, W. Lezius, and G. Smith. 2002. The TIGER Treebank. In Proceed- ings Workshop on Treebanks and Linguistic Theo- ries.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "CoNLL-X shared task on multilingual dependency parsing",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Buchholz",
"suffix": ""
},
{
"first": "Erwin",
"middle": [],
"last": "Marsi",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of CoNLL-2006",
"volume": "",
"issue": "",
"pages": "149--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proceedings of CoNLL-2006, pages 149-164.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Two experiments on learning probabilistic depedency grammars from corpora",
"authors": [
{
"first": "Glenn",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 1992,
"venue": "Working Notes of the Workshop Statistically-Based NLP Techniques",
"volume": "",
"issue": "",
"pages": "1--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Glenn Carroll and Eugene Charniak. 1992. Two ex- periments on learning probabilistic depedency gram- mars from corpora. In C. Weir, S. Abney, R. Grish- man, and R. Weischedel, editors, Working Notes of the Workshop Statistically-Based NLP Techniques, pages 1-13. AAAI Press, Menlo Park, CA.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Logistic normal priors for unsupervised probabilistic grammar induction",
"authors": [
{
"first": "B",
"middle": [],
"last": "Shay",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Gimpel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2008,
"venue": "Advances in Neural Information Processing Systems 21",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shay B. Cohen, Kevin Gimpel, and Noah A. Smith. 2008. Logistic normal priors for unsupervised prob- abilistic grammar induction. In Advances in Neural Information Processing Systems 21.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Head-Driven Statistical Models for Natural Language Parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 1999. Head-Driven Statistical Mod- els for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Semi-supervised learning of dependency parsers using generalized expectation criteria",
"authors": [
{
"first": "Gregory",
"middle": [],
"last": "Druck",
"suffix": ""
},
{
"first": "Gideon",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of 47th Annual Meeting of the Association of Computational Linguistics and the 4th IJCNLP of the AFNLP",
"volume": "",
"issue": "",
"pages": "360--368",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregory Druck, Gideon Mann, and Andrew McCal- lum. 2009. Semi-supervised learning of depen- dency parsers using generalized expectation criteria. In Proceedings of 47th Annual Meeting of the As- sociation of Computational Linguistics and the 4th IJCNLP of the AFNLP, pages 360-368, Suntec, Sin- gapore, August.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Posterior regularization for structured latent variable models",
"authors": [
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Joao",
"middle": [],
"last": "Graca",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Gillenwater",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kuzman Ganchev, Joao Graca, Jennifer Gillenwater, and Ben Taskar. 2009. Posterior regularization for structured latent variable models. Technical Report MS-CIS-09-16, University of Pennsylvania Depart- ment of Computer and Information Science.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Variational inference for Bayesian mixtures of factor analyses",
"authors": [
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"J"
],
"last": "Beal",
"suffix": ""
}
],
"year": 2000,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zoubin Ghahramani and Matthew J. Beal. 2000. Vari- ational inference for Bayesian mixtures of factor analyses. In Advances in Neural Information Pro- cessing Systems (NIPS 2000).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Sparsity in dependency grammar induction",
"authors": [
{
"first": "Jennifer",
"middle": [],
"last": "Gillenwater",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchez",
"suffix": ""
},
{
"first": "Joao",
"middle": [],
"last": "Gra\u00e7a",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL-2010 Short Papers",
"volume": "",
"issue": "",
"pages": "194--199",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jennifer Gillenwater, Kuzman Ganchez, Joao Gra\u00e7a, Fernando Pereira, and Ben Taskar. 2010. Sparsity in dependency grammar induction. In Proceedings of ACL-2010 Short Papers, pages 194-199.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Prototype-driven grammar induction",
"authors": [
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of 44th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "881--888",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aria Haghighi and Dan Klein. 2006. Prototype-driven grammar induction. In Proceedings of 44th Annual Meeting of the Association for Computational Lin- guistics, pages 881-888.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Improving unsupervised dependency parsing with richer contexts and smoothing",
"authors": [
{
"first": "P",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "Headden",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mc-Closky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William P. Headden III, Mark Johnson, and David Mc- Closky. 2009. Improving unsupervised dependency parsing with richer contexts and smoothing. In Pro- ceedings of the Conference of the North American Chapter of the Association for Computational Lin- guistics, Boulder, Colorado, June.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Parsing with generative models of predicate-argument structure",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "359--366",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Hockenmaier. 2003. Parsing with generative models of predicate-argument structure. In Proceed- ings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 359-366, Sap- poro, Japan.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Theory of Probability",
"authors": [
{
"first": "H",
"middle": [],
"last": "Jeffreys",
"suffix": ""
}
],
"year": 1948,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Jeffreys. 1948. Theory of Probability. Clarendon Press, Oxford, second edition.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Probability: deductive and inductive problems. Mind",
"authors": [
{
"first": "W",
"middle": [
"E"
],
"last": "Johnson",
"suffix": ""
}
],
"year": 1932,
"venue": "",
"volume": "41",
"issue": "",
"pages": "421--423",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. E. Johnson. 1932. Probability: deductive and in- ductive problems. Mind, 41:421-423.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Stylebook for the Japanese Treebank in VERBMOBIL",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Kawata",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bartels",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Kawata and J. Bartels. 2000. Stylebook for the Japanese Treebank in VERBMOBIL. Technical re- port, Eberhard-Karls-Universit\u00e4t T\u00fcbingen.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Chinese treebanks and grammar extraction",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Keh-Liann",
"suffix": ""
},
{
"first": "Yu-Ming",
"middle": [],
"last": "Hsieh",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of IJCNLP-2004",
"volume": "",
"issue": "",
"pages": "560--565",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Keh-Liann and Yu-Ming Hsieh. 2004. Chinese treebanks and grammar extraction. In Proceedings of IJCNLP-2004, pages 560-565.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Natural language grammar induction using a constituentcontext model",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2001,
"venue": "Advances in Neural Information Processing Systems (NIPS 2001)",
"volume": "1",
"issue": "",
"pages": "35--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2001. Natu- ral language grammar induction using a constituent- context model. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Infor- mation Processing Systems (NIPS 2001), volume 1, pages 35-42. MIT Press.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A generative constituent-context model for improved grammar induction",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Associations for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "128--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2002. A generative constituent-context model for improved grammar induction. In Proceedings of the 40th Associations for Computational Linguistics, pages 128-135.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Corpus-based induction of syntactic structure: Models of dependency and constituency",
"authors": [],
"year": null,
"venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Corpus-based induction of syntactic structure: Mod- els of dependency and constituency. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The Unsupervised Learning of Natural Language Structure",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein. 2005. The Unsupervised Learning of Natu- ral Language Structure. Ph.D. thesis, Stanford Uni- versity, March.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Dependency trees and the strong generative capacity of ccg",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Koller",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Kuhlmann",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "460--468",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Koller and Marco Kuhlmann. 2009. De- pendency trees and the strong generative capacity of ccg. In Proceedings of the 12th Conference of the European Chapter of the Association for Computa- tional Linguistics, pages 460-468, April.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Variational Bayesian grammar induction for natural language",
"authors": [
{
"first": "Kenichi",
"middle": [],
"last": "Kurihara",
"suffix": ""
},
{
"first": "Taisuke",
"middle": [],
"last": "Sato",
"suffix": ""
}
],
"year": 2006,
"venue": "In International Colloquium on Grammatical Inference",
"volume": "",
"issue": "",
"pages": "84--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenichi Kurihara and Taisuke Sato. 2006. Variational Bayesian grammar induction for natural language. In International Colloquium on Grammatical Infer- ence, pages 84-96.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "The estimation of stochastic context-free grammars using the insideoutside algorithm",
"authors": [
{
"first": "K",
"middle": [],
"last": "Lari",
"suffix": ""
},
{
"first": "S",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
}
],
"year": 1990,
"venue": "Computer Speech and Language",
"volume": "4",
"issue": "",
"pages": "35--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Lari and S. J. Young. 1990. The estimation of stochastic context-free grammars using the inside- outside algorithm. Computer Speech and Language, 4:35-56.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Note on the general case of the Bayes-Laplace formula for inductive or a posteriori probabilities",
"authors": [
{
"first": "G",
"middle": [
"J"
],
"last": "Lidstone",
"suffix": ""
}
],
"year": 1920,
"venue": "Transactions of the Faculty of Actuaries",
"volume": "8",
"issue": "",
"pages": "182--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. J. Lidstone. 1920. Note on the general case of the Bayes-Laplace formula for inductive or a posteriori probabilities. Transactions of the Faculty of Actuar- ies, 8:182-192.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"A"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary A. Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computa- tional Linguistics, 19:313-330.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Using universal linguistic knowledge to guide grammar induction",
"authors": [
{
"first": "Tahira",
"middle": [],
"last": "Naseem",
"suffix": ""
},
{
"first": "Harr",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP-2010",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tahira Naseem, Harr Chen, Regina Barzilay, and Mark Johnson. 2010. Using universal linguistic knowl- edge to guide grammar induction. In Proceedings of EMNLP-2010.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "On dual decomposition and linear programming relaxations for natural language processing",
"authors": [
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Sontag",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP-2010",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander M. Rush, David Sontag, Michael Collins, and Tommi Jaakkola. 2010. On dual decomposition and linear programming relaxations for natural lan- guage processing. In Proceedings of EMNLP-2010.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Novel Estimation Methods for Unsupervised Discovery of Latent Structure in Natural Language Text",
"authors": [
{
"first": "A",
"middle": [],
"last": "Noah",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noah A. Smith. 2006. Novel Estimation Methods for Unsupervised Discovery of Latent Structure in Nat- ural Language Text. Ph.D. thesis, Department of Computer Science, John Hopkins University.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Unsupervised multilingual grammar induction",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Snyder",
"suffix": ""
},
{
"first": "Tahira",
"middle": [],
"last": "Naseem",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th ACL and the 4th IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Snyder, Tahira Naseem, and Regina Barzi- lay. 2009. Unsupervised multilingual grammar in- duction. In Proceedings of the Joint Conference of the 47th ACL and the 4th IJCNLP.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "From baby steps to leapfrog: How \"less is more\" in unsupervised dependency parsing",
"authors": [
{
"first": "I",
"middle": [],
"last": "Valentin",
"suffix": ""
},
{
"first": "Hiyan",
"middle": [],
"last": "Spitkovsky",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Alshawi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Ju- rafsky. 2010. From baby steps to leapfrog: How \"less is more\" in unsupervised dependency parsing. In Proceedings of NAACL-HLT 2010.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "Sizes and granularity of POS of multilingual corpora",
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"3\">Languages Sentences POS Tags</td></tr><tr><td>WSJ10</td><td>7,422</td><td>36</td></tr><tr><td>Chinese10</td><td>52,424</td><td>28</td></tr><tr><td>Czech10</td><td>27,375</td><td>1,149</td></tr><tr><td>German10</td><td>13,473</td><td>51</td></tr><tr><td>Japanese10</td><td>12,884</td><td>77</td></tr></table>",
"num": null
},
"TABREF2": {
"text": "",
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"7\">Figure 2: Effects of numbers of constraints on the directed and undirected dependency accuracy. The category penalty constant is fixed at 10\u221215.</td></tr><tr><td>Constraints</td><td>1</td><td>3</td><td>7</td><td>14</td><td>20</td><td>28</td></tr><tr><td>Directed</td><td>40.2</td><td>47.67</td><td>67.64</td><td>68.2</td><td>74.22</td><td>74.59</td></tr><tr><td>Undirected</td><td>54.02</td><td>59.59</td><td>73.74</td><td>73.58</td><td>78.55</td><td>79.11</td></tr><tr><td colspan=\"10\">Effects of the category penalty constant on dependency accuracy</td></tr><tr><td>Penalty</td><td>1</td><td>1.00E-05</td><td>1.00E-10</td><td>1.00E-15</td><td>1.00E-20</td><td>1.00E-25</td><td>1.00E-30</td><td>1.00E-35</td><td>1.00E-40</td></tr><tr><td>Directed</td><td>70.08</td><td>73.79</td><td>74.68</td><td>74.75</td><td>74.58</td><td>74.51</td><td>74.59</td><td>74.62</td><td>74.52</td></tr><tr><td>Undirected</td><td>74.41</td><td>78.34</td><td>79.2</td><td>79.27</td><td>79.13</td><td>79.03</td><td>79.11</td><td>79.14</td><td>79.04</td></tr></table>",
"num": null
},
"TABREF3": {
"text": "Undirected and directed dependency accuracy of grammar induction on the English Penn Treebank. The baseline for English is (Naseem et al., 2010).",
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td colspan=\"3\">Undirected</td><td colspan=\"3\">Directed</td><td>Baseline</td></tr><tr><td/><td>Precision</td><td>Recall</td><td>F1</td><td>Precision</td><td>Recall</td><td>F1</td><td>Directed F1</td></tr><tr><td>WSJ10 (Sect. 23)</td><td>79.24</td><td>79.29</td><td>79.27</td><td>74.72</td><td>74.77</td><td>74.75</td><td>73.80</td></tr><tr><td>WSJ10 (10X)</td><td>79.59</td><td>79.65</td><td>79.62</td><td>75.44</td><td>75.50</td><td>75.47</td><td>-</td></tr><tr><td colspan=\"8\">tags are grouped to easily map to cross-linguistic category classes, each class still contains a lot of syntactic categories. Because we do not use inflectional information in restraining the search space, the data sparsity becomes significant in Czech and therefore deteriorates the accuracy.</td></tr></table>",
"num": null
},
"TABREF4": {
"text": "Top-10 over-generation and under-generation in the English experiment (10X) when compared against Collins's gold standard",
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"2\">Over-generation</td><td colspan=\"2\">Under-generation</td></tr><tr><td>Errors</td><td>Counts</td><td>Errors</td><td>Counts</td></tr><tr><td>RB &gt; VB</td><td>402</td><td>VBD &lt; IN</td><td>364</td></tr><tr><td>CD &lt; IN</td><td>200</td><td>DT &gt; NN</td><td>331</td></tr><tr><td>NN &lt; IN</td><td>197</td><td>VBD &lt; TO</td><td>283</td></tr><tr><td>RB &gt; VBN</td><td>188</td><td>VBD &lt; RB</td><td>275</td></tr><tr><td>NN &lt; TO</td><td>181</td><td>VBZ &lt; RB</td><td>244</td></tr><tr><td>NNP &gt; CD</td><td>180</td><td>IN &lt; NN</td><td>219</td></tr><tr><td>MD &gt; VB</td><td>166</td><td>JJ &gt; NN</td><td>203</td></tr><tr><td>NNS &lt; IN</td><td>149</td><td>MD &lt; RB</td><td>194</td></tr><tr><td>NNP &gt; NN</td><td>145</td><td>MD &lt; VB</td><td>185</td></tr><tr><td>NN &gt; NNP</td><td>141</td><td>NNP &gt; NNP</td><td>179</td></tr></table>",
"num": null
},
"TABREF5": {
"text": "for Czech, German, and Japanese.",
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td colspan=\"3\">Undirected</td><td colspan=\"3\">Directed</td><td>Baseline</td></tr><tr><td/><td>Precision</td><td>Recall</td><td>F1</td><td>Precision</td><td>Recall</td><td>F1</td><td>Directed F1</td></tr><tr><td>WSJ10 (10X)</td><td>79.59</td><td>79.65</td><td>79.62</td><td>75.44</td><td>75.50</td><td>75.47</td><td>73.80 \u2020</td></tr><tr><td>Chinese10 (10X)</td><td>68.80</td><td>68.88</td><td>68.84</td><td>62.21</td><td>62.29</td><td>62.25</td><td>35.77 \u2021</td></tr><tr><td>Czech10 (10X)</td><td>59.04</td><td>61.94</td><td>60.46</td><td>53.27</td><td>55.88</td><td>54.54</td><td>54.70</td></tr><tr><td>German10 (10X)</td><td>65.13</td><td>65.20</td><td>65.17</td><td>56.68</td><td>56.74</td><td>56.71</td><td>47.40</td></tr><tr><td>Japanese10 (10X)</td><td>75.65</td><td>78.97</td><td>77.27</td><td>67.11</td><td>70.05</td><td>68.55</td><td>60.80</td></tr></table>",
"num": null
}
}
}
}