{ "paper_id": "I17-1006", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:40:01.140237Z" }, "title": "Dependency Parsing with Partial Annotations: An Empirical Comparison", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Soochow University", "location": { "settlement": "Suzhou", "country": "China" } }, "email": "" }, { "first": "Zhenghua", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Soochow University", "location": { "settlement": "Suzhou", "country": "China" } }, "email": "" }, { "first": "Jun", "middle": [], "last": "Lang", "suffix": "", "affiliation": {}, "email": "langjun.lj@alibaba-inc.com" }, { "first": "Qingrong", "middle": [], "last": "Xia", "suffix": "", "affiliation": { "laboratory": "", "institution": "Soochow University", "location": { "settlement": "Suzhou", "country": "China" } }, "email": "qrxia@stu.suda.edu" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Soochow University", "location": { "settlement": "Suzhou", "country": "China" } }, "email": "minzhang@suda.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes and compares two straightforward approaches for dependency parsing with partial annotations (PA). The first approach is based on a forest-based training objective for two CRF parsers, i.e., a biaffine neural network graph-based parser (Biaffine) and a traditional log-linear graph-based parser (LL-GPar). The second approach is based on the idea of constrained decoding for three parsers, i.e., a traditional linear graphbased parser (LGPar), a globally normalized neural network transition-based parser (GN3Par) and a traditional linear transition-based parser (LTPar). For the test phase, constrained decoding is also used for completing partial trees. We conduct experiments on Penn Treebank under three different settings for simulating PA, i.e., random, most uncertain, and divergent outputs from the five parsers. The results show that LLGPar is most effective in directly learning from PA, and other parsers can achieve best performance when PAs are completed into full trees by LLGPar.", "pdf_parse": { "paper_id": "I17-1006", "_pdf_hash": "", "abstract": [ { "text": "This paper describes and compares two straightforward approaches for dependency parsing with partial annotations (PA). The first approach is based on a forest-based training objective for two CRF parsers, i.e., a biaffine neural network graph-based parser (Biaffine) and a traditional log-linear graph-based parser (LL-GPar). The second approach is based on the idea of constrained decoding for three parsers, i.e., a traditional linear graphbased parser (LGPar), a globally normalized neural network transition-based parser (GN3Par) and a traditional linear transition-based parser (LTPar). For the test phase, constrained decoding is also used for completing partial trees. We conduct experiments on Penn Treebank under three different settings for simulating PA, i.e., random, most uncertain, and divergent outputs from the five parsers. 
The results show that LLGPar is the most effective in directly learning from PA, and the other parsers achieve their best performance when PAs are completed into full trees by LLGPar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Traditional supervised approaches for structural classification assume full annotation (FA), meaning that the training instances have complete manually-labeled structures. In the case of dependency parsing, FA means a complete parse tree is provided for each training sentence. However, recent studies suggest that it is more economical and effective to construct labeled data with partial annotation (PA). Much research effort has been devoted to obtaining partially-labeled data for different [Figure 1: An example partial tree, where only the heads of \"saw\" and \"with\" are given.]", "cite_spans": [], "ref_spans": [ { "start": 565, "end": 573, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "tasks via active learning (Sassano and Kurohashi, 2010; Mirroshandel and Nasr, 2011; Li et al., 2012; Marcheggiani and Arti\u00e8res, 2014; Flannery and Mori, 2015; Li et al., 2016) , cross-lingual syntax projection (Spreyer and Kuhn, 2009; Ganchev et al., 2009; Jiang et al., 2010; Li et al., 2014) , or mining natural annotation implicitly encoded in web pages (Jiang et al., 2013; Liu et al., 2014; Nivre et al., 2014; Yang and Vozila, 2014) . Figure 1 gives an example sentence partially annotated with two dependencies. However, there is still no systematic study on how to build dependency parsers with PA. Most previous works listed above rely on ad-hoc strategies designed only for basic dependency parsers. One exception is that Li et al. (2014) convert partial trees into forests and train a traditional log-linear graph-based dependency parser (LLGPar) with PA based on a forest-based objective, showing promising results.
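To make the partial-annotation setting concrete, here is a minimal sketch (ours, not part of the paper) of how the Figure 1 annotation can be represented: each word either has an annotated head or is left undecided.

```python
# A partial tree d^p stores, for each word 1..n, its annotated head
# (0 denotes the pseudo-root $0) or None when the head is unannotated.
words = ["$0", "I", "saw", "Sarah", "with", "a", "telescope"]

# Figure 1: only the heads of "saw" (-> $0) and "with" (-> "saw") are given.
partial_heads = {2: 0, 4: 2}  # modifier index -> head index

def annotation_ratio(heads, n_words):
    """|d^p| / n: the fraction of words whose heads are annotated."""
    return len(heads) / n_words

print(annotation_ratio(partial_heads, len(words) - 1))  # 2/6, i.e. ~33% PA
```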
Meanwhile, it is still unclear how PAs can be used by other mainstream dependency parsers, such as the traditional linear graph-based parser (LGPar) and transition-based parser (LTPar), and the newly proposed biaffine neural network graph-based parser (Biaffine) (Dozat and Manning, 2017) and globally normalized neural network transition-based parser (GN3Par) (Andor et al., 2016) .", "cite_spans": [ { "start": 26, "end": 55, "text": "(Sassano and Kurohashi, 2010;", "ref_id": "BIBREF32" }, { "start": 56, "end": 84, "text": "Mirroshandel and Nasr, 2011;", "ref_id": "BIBREF26" }, { "start": 85, "end": 101, "text": "Li et al., 2012;", "ref_id": "BIBREF15" }, { "start": 102, "end": 134, "text": "Marcheggiani and Arti\u00e8res, 2014;", "ref_id": "BIBREF21" }, { "start": 135, "end": 159, "text": "Flannery and Mori, 2015;", "ref_id": "BIBREF9" }, { "start": 160, "end": 176, "text": "Li et al., 2016)", "ref_id": "BIBREF17" }, { "start": 211, "end": 235, "text": "(Spreyer and Kuhn, 2009;", "ref_id": "BIBREF33" }, { "start": 236, "end": 257, "text": "Ganchev et al., 2009;", "ref_id": "BIBREF10" }, { "start": 258, "end": 277, "text": "Jiang et al., 2010;", "ref_id": "BIBREF12" }, { "start": 278, "end": 294, "text": "Li et al., 2014)", "ref_id": "BIBREF16" }, { "start": 358, "end": 378, "text": "(Jiang et al., 2013;", "ref_id": "BIBREF13" }, { "start": 379, "end": 396, "text": "Liu et al., 2014;", "ref_id": "BIBREF18" }, { "start": 397, "end": 416, "text": "Nivre et al., 2014;", "ref_id": "BIBREF28" }, { "start": 417, "end": 439, "text": "Yang and Vozila, 2014)", "ref_id": "BIBREF36" }, { "start": 442, "end": 451, "text": "Figure 1", "ref_id": null }, { "start": 734, "end": 750, "text": "Li et al. (2014)", "ref_id": "BIBREF16" }, { "start": 1194, "end": 1219, "text": "(Dozat and Manning, 2017)", "ref_id": "BIBREF6" }, { "start": 1292, "end": 1312, "text": "(Andor et al., 2016)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper aims to thoroughly study this issue and make a systematic comparison of different approaches for dependency parsing with PA. In summary, we make the following contributions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We present a general framework for directly training GN3Par, LGPar and LTPar with PA based on constrained decoding. The basic idea is to use the current feature weights to parse the sentence under the PA-constrained search space, and use the best parse as a pseudo gold-standard reference for feature weight update during perceptron training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We also implement the forest-objective based approach of Li et al. (2014) for the two CRF parsers, i.e., LLGPar and Biaffine.", "cite_spans": [ { "start": 59, "end": 75, "text": "Li et al. (2014)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We have made a thorough comparison among different directly-train approaches under three different settings for simulating PA, i.e., random dependencies, most uncertain dependencies, and dependencies with divergent outputs from the five parsers.
We have also compared the proposed directly-train approaches with the straightforward complete-then-train approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Extensive experiments lead to several interesting and clear findings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Given an input sentence x = w_0 w_1 ... w_n, a dependency tree comprises a set of dependencies, namely d = {i \u21b7 j : 0 \u2264 i \u2264 n, 1 \u2264 j \u2264 n}, where i \u21b7 j is a dependency from a head word i to a modifier word j. A complete dependency tree contains n dependencies, namely |d| = n, whereas a partial dependency tree contains fewer than n dependencies, namely |d| < n. Alternatively, FA can be understood as a special form of PA. For clarity, we denote a complete tree as d and a partial tree as d^p. The decoding procedure aims to find an optimal complete tree d^*:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Parsing", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "d^* = arg max_{d \u2208 Y(x)} Score(x, d; w)", "eq_num": "(1)" } ], "section": "Dependency Parsing", "sec_num": "2" }, { "text": "where Y(x) defines the search space containing all legal trees for x, and w denotes the model parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Parsing", "sec_num": "2" }, { "text": "The graph-based method factorizes the score of a dependency tree into those of small subtrees p:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph-based Approach", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Score(x, d; w) = \u2211_{p \u2286 d} Score(x, p; w)", "eq_num": "(2)" } ], "section": "Graph-based Approach", "sec_num": "2.1" },
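To ground the exact search of Equations (1) and (2), the sketch below implements the first-order projective case with Eisner's dynamic program. It is illustrative only (the authors' parsers are separate C++ implementations): it assumes a precomputed arc-score matrix and returns just the optimal score, omitting the backpointers needed to recover d^* itself.

```python
import numpy as np

def eisner_best_score(score):
    """score[h, m]: score of the arc h -> m; token 0 is the pseudo-root $0.
    Returns max over projective trees of the summed arc scores (Eqs. 1-2)."""
    n = score.shape[0]
    # complete/incomp[i, j, d]: best score of span (i, j);
    # d = 1 means the head is i (right arc), d = 0 means the head is j.
    complete = np.full((n, n, 2), -np.inf)
    incomp = np.full((n, n, 2), -np.inf)
    for i in range(n):
        complete[i, i, 0] = complete[i, i, 1] = 0.0
    for width in range(1, n):
        for i in range(n - width):
            j = i + width
            # build an incomplete span by adding the arc i -> j or j -> i
            best = max(complete[i, k, 1] + complete[k + 1, j, 0]
                       for k in range(i, j))
            incomp[i, j, 1] = best + score[i, j]
            incomp[i, j, 0] = best + score[j, i]
            # finish a complete span by attaching an incomplete item
            complete[i, j, 1] = max(incomp[i, k, 1] + complete[k, j, 1]
                                    for k in range(i + 1, j + 1))
            complete[i, j, 0] = max(complete[i, k, 0] + incomp[k, j, 0]
                                    for k in range(i, j))
    return complete[0, n - 1, 1]
```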
{ "text": "Dynamic programming based exact search is usually applied to find the optimal tree (McDonald et al., 2005; McDonald and Pereira, 2006; Carreras, 2007; Koo and Collins, 2010) . Biaffine is a first-order model and only incorporates scores of single dependencies. In contrast, for LLGPar and LGPar, we follow Li et al. (2014) and adopt the second-order model of McDonald and Pereira (2006) , considering scores of single dependencies and adjacent siblings. Biaffine and LLGPar are both CRF parsers. Please note that the original Biaffine is locally trained on each word. In this work, we follow Ma and Hovy (2017) and add a global CRF loss in the projective case, in order to directly use the proposed approach of Li et al. (2014) . In other words, we extend the original Biaffine parser described in Dozat and Manning (2017) by adding a CRF layer. Under the CRF model, the conditional probability of d given x is:", "cite_spans": [ { "start": 84, "end": 107, "text": "(McDonald et al., 2005;", "ref_id": "BIBREF23" }, { "start": 108, "end": 135, "text": "McDonald and Pereira, 2006;", "ref_id": "BIBREF24" }, { "start": 136, "end": 151, "text": "Carreras, 2007;", "ref_id": "BIBREF2" }, { "start": 152, "end": 174, "text": "Koo and Collins, 2010)", "ref_id": "BIBREF14" }, { "start": 317, "end": 333, "text": "Li et al. (2014)", "ref_id": "BIBREF16" }, { "start": 370, "end": 397, "text": "McDonald and Pereira (2006)", "ref_id": "BIBREF24" }, { "start": 606, "end": 624, "text": "Ma and Hovy (2017)", "ref_id": "BIBREF19" }, { "start": 725, "end": 741, "text": "Li et al. (2014)", "ref_id": "BIBREF16" }, { "start": 812, "end": 836, "text": "Dozat and Manning (2017)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Graph-based Approach", "sec_num": "2.1" }, { "text": "p(d|x; w) = exp(Score(x, d; w)) / \u2211_{d' \u2208 Y(x)} exp(Score(x, d'; w)) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph-based Approach", "sec_num": "2.1" }, { "text": "For training, w is optimized using gradient descent to maximize the likelihood of the training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph-based Approach", "sec_num": "2.1" }, { "text": "Biaffine uses a neural network to compute the score of each dependency. First, the input word and POS tag sequence are fully encoded with two BiLSTM layers. Then, two MLPs are applied to each word position i to obtain two word representations, i.e., r^h_i (w_i as head) and r^m_i (w_i as modifier). Finally, a biaffine classifier predicts the score of an arbitrary dependency i \u21b7 j.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph-based Approach", "sec_num": "2.1" }, { "text": "score(i \u21b7 j) = r^h_i \u2022 W \u2022 r^m_j + r^h_i \u2022 V (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph-based Approach", "sec_num": "2.1" }, { "text": "where W (matrix) and V (vector) are the biaffine parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph-based Approach", "sec_num": "2.1" },
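As a concrete reading of Equation (4), a minimal NumPy sketch of the biaffine scorer, assuming the BiLSTM-plus-MLP representations r^h and r^m have already been computed:

```python
import numpy as np

def biaffine_arc_scores(r_head, r_mod, W, V):
    """score(i -> j) = r^h_i . W . r^m_j + r^h_i . V, per Eq. (4).
    r_head, r_mod: (n, d) matrices of head/modifier representations;
    W: (d, d) matrix; V: (d,) vector.
    Returns an (n, n) matrix S with S[i, j] = score of the arc i -> j."""
    bilinear = r_head @ W @ r_mod.T        # head-modifier interaction term
    head_bias = (r_head @ V)[:, None]      # head-only term, broadcast over j
    return bilinear + head_bias
```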
{ "text": "LLGPar is a traditional discrete feature based model, which defines the score of a tree as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph-based Approach", "sec_num": "2.1" }, { "text": "Score(x, d; w) = w \u2022 f(x, d) (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph-based Approach", "sec_num": "2.1" }, { "text": "f(x, d) is a sparse accumulated feature vector corresponding to d.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph-based Approach", "sec_num": "2.1" }, { "text": "LGPar uses perceptron-like online training to directly learn w. The workflow is similar to Algorithm 1, except that the gold-standard reference d^+ is directly provided in the training data, without the need of constrained decoding in line 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph-based Approach", "sec_num": "2.1" }, { "text": "Algorithm 1 Perceptron training based on constrained decoding.\n1: Input: partially labeled data D = {(x_j, d^p_j)}_{j=1}^{N}; Output: w\n2: Initialization: w^(0) = 0, k = 0\n3: for i = 1 to I do // iterations\n4: for (x_j, d^p_j) \u2208 D do // traverse\n5: d^- = arg max_{d \u2208 Y(x_j)} Score(x_j, d; w) // unconstrained decoding: LGPar\n6: a^- = arg max_{a \u2192 d \u2208 Y(x_j)} Score(x_j, a \u2192 d; w) // unconstrained decoding: LTPar\n7: d^+ = arg max_{d \u2208 Y(x_j, d^p_j)} Score(x_j, d; w) // constrained decoding: LGPar\n8: a^+ = arg max_{a \u2192 d \u2208 Y(x_j, d^p_j)} Score(x_j, a \u2192 d; w) // constrained decoding: LTPar\n9: w^{k+1} = w^k + f(x_j, d^+) \u2212 f(x_j, d^-) // update: LGPar\n10: w^{k+1} = w^k + f(x_j, a^+) \u2212 f(x_j, a^-) // update: LTPar\n11: k = k + 1\n12: end for\n13: end for", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph-based Approach", "sec_num": "2.1" }, { "text": "The transition-based method builds a dependency tree by applying a sequence of shift/reduce actions a, and factorizes the score of a tree into the sum of the scores of each action in a (Yamada and Matsumoto, 2003; Nivre, 2003; Zhang and Nivre, 2011) :", "cite_spans": [ { "start": 174, "end": 202, "text": "(Yamada and Matsumoto, 2003;", "ref_id": "BIBREF35" }, { "start": 203, "end": 215, "text": "Nivre, 2003;", "ref_id": "BIBREF27" }, { "start": 216, "end": 238, "text": "Zhang and Nivre, 2011)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Transition-based Approach", "sec_num": "2.2" }, { "text": "Score(x, d; w) = Score(x, a \u2192 d; w) = \u2211_{i=1}^{|a|} Score(x, c_i, a_i; w) (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition-based Approach", "sec_num": "2.2" }, { "text": "where a_i is the action taken at step i and c_i is the configuration status
after taking actions a_1 ... a_{i\u22121}. Transition-based methods use inexact beam search to find the highest-scoring action sequence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition-based Approach", "sec_num": "2.2" }, { "text": "GN3Par uses a neural network to predict the scores of different actions given a state (Chen and Manning, 2014; Andor et al., 2016) . First, 48 atomic features are embedded and concatenated as the input layer. Then, two hidden layers are applied to get the scores of all feasible actions. Unlike the traditional perceptron-like training, which only considers the best action sequence in the beam and the gold-standard sequence, their idea of global normalization is to approximately compute the probabilities of all the sequences in the beam to obtain a global CRF-like loss.", "cite_spans": [ { "start": 82, "end": 106, "text": "(Chen and Manning, 2014;", "ref_id": "BIBREF3" }, { "start": 107, "end": 126, "text": "Andor et al., 2016)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Transition-based Approach", "sec_num": "2.2" }, { "text": "LTPar is a traditional discrete feature based model like LLGPar and LGPar, and adopts global perceptron-like training to learn the feature weights w. We build an arc-eager transition-based dependency parser with the features described in Zhang and Nivre (2011) .", "cite_spans": [ { "start": 232, "end": 254, "text": "Zhang and Nivre (2011)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Transition-based Approach", "sec_num": "2.2" }, { "text": "3 Directly training parsers with PA. As described in Li et al. (2014) , CRF parsers such as LLGPar and Biaffine can naturally learn from PA based on the idea of ambiguous labeling, which allows a sentence to have multiple parse trees (a forest) as its gold-standard reference (Riezler et al., 2002; Dredze et al., 2009; T\u00e4ckstr\u00f6m et al., 2013) . First, a partial tree d^p is converted into a forest by adding all possible dependencies pointing to the remaining words without heads, with the constraint that a newly added dependency does not violate existing ones in d^p. The forest can be formally defined as", "cite_spans": [ { "start": 52, "end": 68, "text": "Li et al. (2014)", "ref_id": "BIBREF16" }, { "start": 273, "end": 295, "text": "(Riezler et al., 2002;", "ref_id": "BIBREF31" }, { "start": 296, "end": 316, "text": "Dredze et al., 2009;", "ref_id": "BIBREF7" }, { "start": 317, "end": 340, "text": "T\u00e4ckstr\u00f6m et al., 2013)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Transition-based Approach", "sec_num": "2.2" }, { "text": "F(x, d^p) = {d : d \u2208 Y(x), d^p \u2286 d},", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition-based Approach", "sec_num": "2.2" }, { "text": "whose conditional probability is the sum of the probabilities of all trees that it contains:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition-based Approach", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(d^p | x; w) = \u2211_{d \u2208 F(x, d^p)} p(d | x; w)", "eq_num": "(7)" } ], "section": "Transition-based Approach", "sec_num": "2.2" }, { "text": "Then, we can define a forest-based training objective function to maximize the likelihood of the training data, as described in Li et al. (2014) .", "cite_spans": [ { "start": 123, "end": 139, "text": "Li et al. (2014)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Transition-based Approach", "sec_num": "2.2" },
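To ground Equation (7) and the definition of F(x, d^p), here is a small brute-force sketch (ours; a real implementation uses dynamic programming, as in Li et al. (2014)). It enumerates every complete head assignment of a toy sentence, keeps the valid single-rooted trees, and sums the probability mass of those containing d^p.

```python
import itertools
import math

def is_tree(heads):
    # heads[m] = head of word m (m = 1..n); head 0 is the pseudo-root.
    for m in heads:
        h, seen = heads[m], set()
        while h != 0:              # a cycle can never reach the root
            if h in seen:
                return False
            seen.add(h)
            h = heads[h]
    return True

def forest_probability(n, partial, arc_score):
    """Eq. (7): p(d^p|x) = sum of p(d|x) over all trees d containing d^p.
    arc_score(h, m) is a toy stand-in for the learned Score(x, d; w)."""
    Z = numer = 0.0
    for assign in itertools.product(range(n + 1), repeat=n):
        heads = {m: assign[m - 1] for m in range(1, n + 1)}
        if any(h == m for m, h in heads.items()) or not is_tree(heads):
            continue
        if sum(1 for h in heads.values() if h == 0) != 1:
            continue               # enforce the single-root constraint
        w = math.exp(sum(arc_score(h, m) for m, h in heads.items()))
        Z += w
        if all(heads[m] == h for m, h in partial.items()):
            numer += w             # this tree lies in the forest F(x, d^p)
    return numer / Z

# Toy usage: 3 words, PA fixes the head of word 2 to the root.
print(forest_probability(3, {2: 0}, lambda h, m: 0.1 * (h + m)))
```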
(2014)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Transition-based Approach", "sec_num": "2.2" }, { "text": "LGPar can be extended to directly learn from PA based on the idea of constrained decoding, as shown in Algorithm 1, which has been previously applied to Chinese word segmentation with partially labeled sequences (Jiang et al., 2010) . The idea is using the best tree d + in the constrained search space Y(x j , d p j ) (line 7) as a pseudo goldstandard reference for weight update. In traditional perceptron training, d + would be a complete parse tree provided in the training data. It is trivial to implement constrained decoding for graph-based parsers, and we only need to disable some illegal combination operations during dynamic programming.", "cite_spans": [ { "start": 212, "end": 232, "text": "(Jiang et al., 2010)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Transition-based Approach", "sec_num": "2.2" }, { "text": "LTPar can also directly learn from PA in a similar way, as shown in Algorithm 1. Constrained decoding is performed to find a pseudo gold-standard reference (line 8). It is more complicate to design constrained decoding for transition-based parsing than graph-based parsing. Fortunately, Nivre et al. (2014) propose a constrained decoding procedure for the arc-eager parsing system. We ignore the details due to the space limitation.", "cite_spans": [ { "start": 287, "end": 306, "text": "Nivre et al. (2014)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Transition-based Approach", "sec_num": "2.2" }, { "text": "GN3Par learns from PA in a similar manner with LTPar. The difference is that for each sentence, all the sequences in beam are used as negative examples in Line 6, and a + obtained in Line 8 as gold-standard. Then, the global loss is computed in the same way with the case of FA. 1 Meanwhile, since GN3Par uses the arc-standard transition system, we also develop a constrained decoding procedure for the arc-standard system, which will be released as supporting documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition-based Approach", "sec_num": "2.2" }, { "text": "Data. We conduct experiments on Penn Treebank (PTB), and follow the standard data splitting (sec 2-21 as training, sec 22 as development, and sec 23 as test). Original bracketed structures are converted into dependency structures using Penn2Malt with default head-finding rules. We build a CRF-based bigram part-ofspeech (POS) tagger to produce automatic POS tags for all train/dev/test data (10-way jackknifing on training data), with tagging accuracy 97.3% on test data. As suggested by an earlier anonymous reviewer, we further split the training data into two parts. We assume that the first 1K training sentences are provided as a small-scale data with FA, which can be obtained by a small amount of manual annotation or through cross-lingual projection methods. We simulate PA for the remaining 39K sentences. Table 1 shows the data statistics.", "cite_spans": [], "ref_spans": [ { "start": 816, "end": 823, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Parameter settings. We implement all five parsers from scratch using C++, which will be released publicly in the future. For those who are immediately interested, please contact us. We train LLGPar with stochastic gradient descent (Finkel et al., 2008) . 
{ "text": "Data. We conduct experiments on Penn Treebank (PTB), and follow the standard data split (sec 2-21 as training, sec 22 as development, and sec 23 as test). Original bracketed structures are converted into dependency structures using Penn2Malt with default head-finding rules. We build a CRF-based bigram part-of-speech (POS) tagger to produce automatic POS tags for all train/dev/test data (10-way jackknifing on training data), with tagging accuracy 97.3% on test data. As suggested by an earlier anonymous reviewer, we further split the training data into two parts. We assume that the first 1K training sentences are provided as a small-scale dataset with FA, which can be obtained by a small amount of manual annotation or through cross-lingual projection methods. We simulate PA for the remaining 39K sentences. Table 1 shows the data statistics.", "cite_spans": [], "ref_spans": [ { "start": 816, "end": 823, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Parameter settings. We implement all five parsers from scratch using C++, and will release the code publicly in the future; those who are immediately interested may contact us. We train LLGPar with stochastic gradient descent (Finkel et al., 2008) . For LTPar and GN3Par, the beam size is 64 and the standard early update is adopted during training (Collins, 2002) .", "cite_spans": [ { "start": 231, "end": 252, "text": "(Finkel et al., 2008)", "ref_id": "BIBREF8" }, { "start": 354, "end": 369, "text": "(Collins, 2002)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "For LGPar and LTPar, the averaged perceptron is adopted (Collins, 2002) .", "cite_spans": [ { "start": 48, "end": 63, "text": "(Collins, 2002)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "For Biaffine, we directly adopt most hyperparameters of the released code of Dozat and Manning (2017), only removing the components related to dependency labels, since we focus on unlabeled dependency parsing in this work. The LSTM (two forward plus two backward) layers all use 300-dimension hidden cells. Dropout with a ratio of 0.75 is applied to most layers before output. The two MLPs both have 100-dimension outputs without hidden layers. Adam optimization is adopted with \u03b2_1 = \u03b2_2 = 0.9.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "For GN3Par, we follow Andor et al. (2016) , use two 1024 \u00d7 1024 hidden layers, and adopt Adam optimization with momentum (ratio of 0.9).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "For both Biaffine and GN3Par, we set the embedding dimension of both words and tags to 100, use the pretrained GloVe word embeddings for initialization 2 , and randomly initialize the embeddings of POS tags.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Since we have two sets of training data, we adopt the simple corpus-weighting strategy of Li et al. (2014) . In each iteration, we merge train-1K and a random subset of 10K sentences from train-39K, shuffle them, and then use them for training. For all parsers, training terminates when the peak parsing accuracy on dev data does not improve in 30 consecutive iterations.", "cite_spans": [ { "start": 90, "end": 106, "text": "Li et al. (2014)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "For evaluation metrics, we use the standard unlabeled attachment score (UAS), excluding punctuation marks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "In order to simulate PA for each sentence in train-39K, we only keep \u03b1% of the gold-standard dependencies (not considering punctuation marks), and remove all other dependencies. We experiment with three simulation settings to fully investigate the capability of different approaches in learning from PA. Random (30% or 15%): 3 For each sentence in train-39K, we randomly select \u03b1% of the words, and only keep dependencies linking to these words. Table 3 : UAS on dev data: parsers are directly trained on train-1K with FA and train-39K with PA. \"FA (random) \u03b1%\" means randomly selecting \u03b1% sentences with FA from train-39K for training.
Numbers in parentheses are the accuracy gap from the second column \"FA (100%)\".", "cite_spans": [], "ref_spans": [ { "start": 432, "end": 439, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Three settings for simulating PA on train-39K", "sec_num": "4.1" }, { "text": "With this setting, we aim to study the issue purely, without biasing toward certain structures. This setting may best fit the scenario of automatic syntax projection based on bitexts, where the projected dependencies tend to be arbitrary (and noisy) due to errors in automatic source-language parses and word alignments, and to non-isomorphic syntax between languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Three settings for simulating PA on train-39K", "sec_num": "4.1" }, { "text": "Uncertain (30% or 15%): In their work on active learning with PA, Li et al. (2016) show that the marginal probabilities from LLGPar are the most effective uncertainty measure for selecting the most informative words to be annotated. Following their work, we first train LLGPar on train-1K with FA, and then use LLGPar to parse train-39K and select the \u03b1% most uncertain words. We use the gold-standard heads of the selected words as PAs for model training.", "cite_spans": [ { "start": 66, "end": 82, "text": "Li et al. (2016)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Three settings for simulating PA on train-39K", "sec_num": "4.1" }, { "text": "Following Li et al. (2016) , we measure the uncertainty of a word w_i according to the marginal probability gap between its two most likely heads h^0_i and h^1_i.", "cite_spans": [ { "start": 10, "end": 26, "text": "Li et al. (2016)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Three settings for simulating PA on train-39K", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Uncertainty(x, i) = p(h^0_i \u21b7 i | x) \u2212 p(h^1_i \u21b7 i | x)", "eq_num": "(8)" } ], "section": "Three settings for simulating PA on train-39K", "sec_num": "4.1" }, { "text": "This setting fits the scenario of active learning, which aims to save annotation effort by only annotating the most useful structures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Three settings for simulating PA on train-39K", "sec_num": "4.1" },
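A minimal sketch (ours) of this uncertainty-based selection, assuming the marginal head probabilities from LLGPar have already been computed as a matrix:

```python
import numpy as np

def select_uncertain(marginals, alpha):
    """Pick the alpha fraction of words with the smallest margin of Eq. (8),
    p(h^0_i -> i | x) - p(h^1_i -> i | x); a smaller gap between the two most
    likely heads means a more uncertain word.
    marginals: (n_words, n_heads) matrix of marginal head probabilities."""
    top2 = np.sort(marginals, axis=1)[:, -2:]
    gaps = top2[:, 1] - top2[:, 0]           # Eq. (8), one value per word
    k = max(1, int(alpha * marginals.shape[0]))
    return np.argsort(gaps)[:k]              # indices of the k most uncertain
```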
{ "text": "Divergence (21.33%): We train all five parsers on train-1K, and use them to parse train-39K. If their output trees do not assign the same head to a word, we keep the gold-standard dependency pointing to that word, leading to 21.33% remaining dependencies. This setting fits the tri-training scenario investigated in Li et al. (2014) .", "cite_spans": [ { "start": 323, "end": 339, "text": "Li et al. (2014)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Three settings for simulating PA on train-39K", "sec_num": "4.1" }, { "text": "We train the five parsers on all the training data with FA. We also employ four publicly available parsers with their default settings. BerkeleyParser (v1.7) is a constituent-structure parser, whose results are converted into dependency structures (Petrov and Klein, 2007) . TurboParser (v2.1.0) is a linear graph-based dependency parser using linear programming for inference (Martins et al., 2013) . Mate-tool (v3.3) is a linear graph-based dependency parser very similar to our implemented LGPar (Bohnet, 2010) . ZPar (v0.6) is a linear transition-based dependency parser very similar to our implemented LTPar (Zhang and Clark, 2011) . The results are shown in Table 2 .", "cite_spans": [ { "start": 248, "end": 272, "text": "(Petrov and Klein, 2007)", "ref_id": "BIBREF30" }, { "start": 377, "end": 399, "text": "(Martins et al., 2013)", "ref_id": "BIBREF22" }, { "start": 499, "end": 513, "text": "(Bohnet, 2010)", "ref_id": "BIBREF1" }, { "start": 613, "end": 636, "text": "(Zhang and Clark, 2011)", "ref_id": "BIBREF37" } ], "ref_spans": [ { "start": 664, "end": 671, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results of different parsers trained on FA", "sec_num": "4.2" }, { "text": "We can see that the five parsers that we adopt achieve competitive parsing accuracy and serve as strong baselines. In particular, the recently proposed neural network parser Biaffine outperforms the other parsers by more than 1%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results of different parsers trained on FA", "sec_num": "4.2" }, { "text": "The five parsers are directly trained on train-1K with FA and train-39K with PA based on the methods described in Section 3. Table 3 shows the results.", "cite_spans": [], "ref_spans": [ { "start": 125, "end": 132, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results of the directly-train approaches", "sec_num": "4.3" }, { "text": "Comparing the five parsers, we have several clear findings. (1) LLGPar is the most effective in directly learning from PA, since its accuracy drop is the smallest over all PA settings compared with FA (100%). (2) Although Biaffine achieves the best accuracy over all settings, thanks to its strong performance under the basic FA setting, we find that the accuracy gap between LLGPar and Biaffine becomes much smaller with PA than with FA. This also indicates that LLGPar is more effective in directly learning from PA. (3) LTPar achieves the lowest accuracy over all settings, especially on PA under uncertain (30%, 15%) and divergence; its accuracy also declines the most under these settings compared with FA (100%). FA (random) vs. PA (random): 4 From the results in the two major columns, we can see that LLGPar achieves higher accuracy by about 0.5% when trained on sentences with \u03b1% random dependencies than when trained on \u03b1% random sentences with FA. This is reasonable and can be explained under the assumption that LLGPar can make full use of PA in model training. In fact, in both cases, the training data contain approximately the same number of annotated dependencies. However, from the perspective of model training, given some dependencies in the case of PA, more information about the syntactic structure can be derived. 5 Taking Figure 1 as an example, \"I 1\" can only modify \"saw 2\" due to the single-root and single-head constraints; similarly, \"Sarah 3\" can only modify either \"saw 2\" or \"with 4\"; and so on. Moreover, since LLGPar is a second-order model, the presence of certain dependencies can directly affect the choice of other dependencies through the scores of adjacent siblings.
Therefore, given the same amount of annotated dependencies, random PA contains more syntactic information than random FA, which explains why LLGPar performs better with PA than with FA.", "cite_spans": [ { "start": 1357, "end": 1358, "text": "5", "ref_id": null } ], "ref_spans": [ { "start": 1366, "end": 1374, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Results of the directly-train approaches", "sec_num": "4.3" }, { "text": "In contrast, all the other four parsers achieve lower accuracy with PA than with FA. Biaffine differs from LLGPar in being a first-order model, and thus cannot fully utilize PA by considering sibling scores. The problem of LGPar may lie in the perceptron training with constrained decoding, which only considers a single best tree that complies with the given PA as the gold-standard (line 7 in Algorithm 1), unlike the forest-based objective of LLGPar, which considers all trees weighted by probabilities. Both GN3Par and LTPar suffer from the inexact search problem. In other words, the approximate beam search can cause the correct tree to drop off the beam too soon due to lower scores for earlier actions, and thus return a bad a^+ that causes the model to be updated toward wrong structures (line 8 in Algorithm 1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results of the directly-train approaches", "sec_num": "4.3" }, { "text": "PA (random) vs. PA (uncertain): 6 We can see that all five parsers achieve much higher accuracy in the latter case. 7 The annotated dependencies in PA (uncertain) are the most uncertain ones for the current statistical parser (i.e., LLGPar), and thus are more helpful for training the models than those in PA (random). Another phenomenon is that, in the case of PA (uncertain), increasing \u03b1% from 15% to 30% actually doubles the number of annotated dependencies, but only boosts the accuracy of LLGPar by 93.02 \u2212 92.44 = 0.58%, which indicates that the newly added 15% of dependencies are much less useful, since the model can already handle these low-uncertainty dependencies well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results of the directly-train approaches", "sec_num": "4.3" }, { "text": "PA (uncertain, 30%) vs. PA (divergence): 8 We can see that all five parsers achieve similar parsing accuracies under the two settings. This indicates that the divergence strategy can find very useful dependencies for all parsers, whereas the uncertainty measure based on LLGPar might be biased towards LLGPar itself to a certain extent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results of the directly-train approaches", "sec_num": "4.3" }, { "text": "In summary, we can conclude from the results that LLGPar is the most effective among all five parsers in directly learning from PA, due to both its second-order modeling and its forest-based training objective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results of the directly-train approaches", "sec_num": "4.3" }, { "text": "The most straightforward method for learning from PA is the complete-then-learn method (Mirroshandel and Nasr, 2011) .
The idea is to first use an existing parser to complete the partial trees in train-39K into full trees via constrained decoding, and then train the target parser on train-1K with FA and train-39K with the completed FA.", "cite_spans": [ { "start": 88, "end": 117, "text": "(Mirroshandel and Nasr, 2011)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Results of the complete-then-train methods", "sec_num": "4.4" }, { "text": "Results of completing via constrained decoding: Table 4 reports UAS of the completed trees on train-39K using different strategies for completion. \"No constraints (0%)\" means that train-39K has no annotated dependencies and normal decoding without constraints is used. In the remaining columns, each parser performs constrained decoding on PA where \u03b1% of the dependencies are provided in each sentence.", "cite_spans": [], "ref_spans": [ { "start": 48, "end": 55, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results of the complete-then-train methods", "sec_num": "4.4" }, { "text": "\u2022 Coarsely-trained-self for completion: We complete PA into FA using the corresponding parsers coarsely trained on only train-1K with FA. We call these parsers Biaffine-1K, LLGPar-1K, LGPar-1K, GN3Par-1K, and LTPar-1K, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results of the complete-then-train methods", "sec_num": "4.4" }, { "text": "\u2022 Fine-trained-LLGPar for completion: We complete PA into FA using LLGPar fine-trained on both train-1K with FA and train-39K with PA. We call this LLGPar LLGPar-1K+39K. Please note that LLGPar-1K+39K actually performs a closed test in this setting, meaning that it parses its own training data. For example, LLGPar-1K+39K trained on random (30%) is employed to complete the same data by filling in the remaining 70% of the dependencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results of the complete-then-train methods", "sec_num": "4.4" }, { "text": "\u2022 Fine-trained-Biaffine for completion: This is the same as the case of \"Fine-trained-LLGPar\", except that we replace LLGPar with Biaffine. We call the resulting parser Biaffine-1K+39K.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results of the complete-then-train methods", "sec_num": "4.4" }, { "text": "Comparing the five parsers trained on train-1K, we can see that constrained decoding has similar effects on all five parsers, and is able to return much more accurate trees. Numbers in parentheses show the accuracy gap between unconstrained (0%) and constrained decoding. This suggests that constrained decoding itself is not responsible for the ineffectiveness of Algorithm 1 for the other parsers, especially LTPar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results of the complete-then-train methods", "sec_num": "4.4" }, { "text": "Comparing the results of LLGPar-1K and LLGPar-1K+39K, it is obvious that the latter produces much better full trees, since the fine-trained LLGPar can make extra use of the PA in train-39K during training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results of the complete-then-train methods", "sec_num": "4.4" }, { "text": "LLGPar-1K+39K and Biaffine-1K+39K achieve similar accuracies.
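Putting the pieces together, a sketch of the complete-then-train pipeline; the parser objects and their methods are hypothetical stand-ins, and constrain is the arc-masking helper sketched in Section 3.

```python
def complete_partial_trees(completer, dataset_pa):
    """Complete each partially annotated sentence into a full tree via
    constrained decoding, yielding ordinary FA training data."""
    completed = []
    for sent, partial in dataset_pa:
        scores = completer.score_arcs(sent)
        full_tree = completer.decode(constrain(scores, partial))
        completed.append((sent, full_tree))
    return completed

def complete_then_train(target_parser, completer, train_1k_fa, train_39k_pa):
    completed_39k = complete_partial_trees(completer, train_39k_pa)
    target_parser.train(train_1k_fa + completed_39k)  # standard FA training
    return target_parser
```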
We choose LLGPar-1K+39K for completion, since LLGPar is the most effective in both learning from PA and completing PA, as indicated by the results in Tables 3 and 4 .", "cite_spans": [], "ref_spans": [ { "start": 215, "end": 228, "text": "Tables 3 and 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results of the complete-then-train methods", "sec_num": "4.4" }, { "text": "Results of training on completed FA: Table 5 compares the performance of the five parsers trained on train-1K with FA and train-39K with completed FA, from which we can draw several clear and interesting findings. First, different from the case of directly training on PA, the accuracy gaps among the five parsers become much more stable when they are trained on data with completed FA, in both completion settings. Second, using parsers coarsely trained on train-1K for completion leads to very bad performance, which is even much worse than that of the directly-train method in Table 3 , except for LTPar with uncertain (30%) and divergence. Third, using the fine-trained LLGPar-1K+39K for completion makes LGPar and LTPar achieve nearly the same accuracies as LLGPar, which may be because LLGPar provides complementary effects during completion, analogous to the scenario of co-training. Table 6 : UAS on test data: comparison of the directly-train and complete-then-train methods.", "cite_spans": [], "ref_spans": [ { "start": 37, "end": 44, "text": "Table 5", "ref_id": null }, { "start": 567, "end": 574, "text": "Table 3", "ref_id": null }, { "start": 878, "end": 885, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Results of the complete-then-train methods", "sec_num": "4.4" }, { "text": "Table 6 reports UAS on the test data of parsers directly trained on train-1K with FA and train-39K with PA, and of those trained on train-1K with FA and train-39K with FA completed by the fine-trained LLGPar-1K+39K. The results are consistent with those on the dev data in Tables 3 and 5. Comparing the two settings, we can draw several interesting findings. First, LLGPar performs slightly better with the directly-train method. Second, the two settings lead to very similar performance for Biaffine, without a clear trend. Third, LGPar performs slightly better with the complete-then-train method in most cases, except for uncertain (30%). Fourth, GN3Par and LTPar perform much better with the complete-then-train method.", "cite_spans": [], "ref_spans": [ { "start": 20, "end": 27, "text": "Table 6", "ref_id": null }, { "start": 289, "end": 296, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results on test data: directly-train vs. complete-then-train", "sec_num": "4.5" }, { "text": "In the parsing community, most previous works adopt ad-hoc methods to learn from PA. Sassano and Kurohashi (2010) , Jiang et al. (2010) , and Flannery and Mori (2015) convert partially annotated instances into local dependency/non-dependency classification instances, which may suffer from the lack of non-local correlation between dependencies in a tree. Mirroshandel and Nasr (2011) and Majidi and Crane (2013) adopt the complete-then-learn method. They use parsers coarsely trained on existing data with FA for completion via constrained decoding. However, our experiments show that this leads to a dramatic decrease in parsing accuracy. Nivre et al. (2014) present a constrained decoding procedure for arc-eager transition-based parsers. However, their work focuses on allowing their parser to effectively exploit external constraints during the evaluation phase.
In this work, we directly employ their method and show that constrained decoding is effective for LTPar, and thus is not responsible for LTPar's ineffectiveness in learning from PA.", "cite_spans": [ { "start": 81, "end": 109, "text": "Sassano and Kurohashi (2010)", "ref_id": "BIBREF32" }, { "start": 112, "end": 131, "text": "Jiang et al. (2010)", "ref_id": "BIBREF12" }, { "start": 636, "end": 655, "text": "Nivre et al. (2014)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "5" }, { "text": "Directly learning from PA based on constrained decoding was previously proposed by Jiang et al. (2013) for Chinese word segmentation, which is treated as a character-level sequence labeling problem. In this work, we first apply the idea to LGPar and LTPar for directly learning from PA.", "cite_spans": [ { "start": 82, "end": 101, "text": "Jiang et al. (2013)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "5" }, { "text": "Directly learning from PA based on a forest-based objective in LLGPar was first proposed by Li et al. (2014) , inspired by the idea of ambiguous labeling. Similar ideas have recently been extensively explored in sequence labeling tasks (Liu et al., 2014; Yang and Vozila, 2014; Marcheggiani and Arti\u00e8res, 2014) . Hwa (1999) pioneers the idea of exploring PA for constituent grammar induction based on a variant of the Inside-Outside re-estimation algorithm (Pereira and Schabes, 1992) . Clark and Curran (2006) propose to train a Combinatory Categorial Grammar parser using partially labeled data only containing predicate-argument dependencies. Mielens et al. (2015) propose to impute missing dependencies based on Gibbs sampling in order to enable traditional parsers to learn from partial trees.", "cite_spans": [ { "start": 90, "end": 106, "text": "Li et al. (2014)", "ref_id": "BIBREF16" }, { "start": 234, "end": 252, "text": "(Liu et al., 2014;", "ref_id": "BIBREF18" }, { "start": 253, "end": 275, "text": "Yang and Vozila, 2014;", "ref_id": "BIBREF36" }, { "start": 276, "end": 308, "text": "Marcheggiani and Arti\u00e8res, 2014)", "ref_id": "BIBREF21" }, { "start": 311, "end": 321, "text": "Hwa (1999)", "ref_id": "BIBREF11" }, { "start": 448, "end": 475, "text": "(Pereira and Schabes, 1992)", "ref_id": "BIBREF29" }, { "start": 478, "end": 501, "text": "Clark and Curran (2006)", "ref_id": "BIBREF4" }, { "start": 639, "end": 660, "text": "Mielens et al. (2015)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "5" }, { "text": "This paper investigates the problem of dependency parsing with partially labeled data. In particular, we focus on the realistic scenario where we have a small-scale training dataset with FA and a large-scale training dataset with PA.
We experiment with three settings for simulating PA and compare several directly-train and complete-then-train approaches with five mainstream parsers, i.e., Biaffine, LLGPar, LGPar, GN3Par and LTPar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "Based on this work, we may draw the following conclusions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "\u2022 For the complete-then-train approach, using parsers coarsely trained on the small-scale data with FA for completion leads to unsatisfactory results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "\u2022 LLGPar is the most effective in directly learning from PA, due to both its second-order modeling and its probabilistic forest-based training objective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "\u2022 All the other four parsers are less effective in directly learning from PA, but can achieve their best performance with the complete-then-train approach, where PAs are completed into FAs by LLGPar fine-trained on all FA+PA data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "However, as our reviewers kindly point out, more extensive experiments and systematic analysis are needed to really understand this interesting issue and provide stronger findings, which we leave for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "We have also tried using all sequences in the beam in line 8 as gold-standard references, instead of only the best a^+, considering that there may be many gold-standard references in the case of PA. However, the accuracies became lower.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://nlp.stanford.edu/projects/glove/ 3 We choose 15% since the parsers achieve about 85% UAS when trained on train-1K (see Table 4 ). The 30% setting aims to examine the effect of different levels of supervision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "These two settings should give the clearest evidence of whether a parser can effectively learn from PAs. Under the same \u03b1%, although containing approximately the same number of dependencies, PA certainly provides more syntactic information than FA, since 1) it is more expensive to annotate PA than FA in terms of annotation time per dependency; and 2) in PA, the partially annotated dependencies can provide strong constraints on the remaining undecided dependencies. Therefore, we assume that a parser is effective in learning from PA if it can achieve at least higher accuracy under PA. 5 Also, as suggested in the work of Li et al. (2016), annotating PA is more time-consuming than annotating FA in terms of averaged time for each dependency, since dependencies in the same sentence are correlated and earlier annotated dependencies usually make later annotation easier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "From the idea of active learning, we know that annotating the most informative dependencies as additional training data can help models best.
So, we select the most uncertain dependencies and compare the result with the setting of randomly selected dependencies. 7 The only exception is LTPar with 30% PA, where the accuracy increases by only 91.35 \u2212 91.12 = 0.23%, which may be caused by the ineffectiveness of LTPar in learning from PA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Selecting uncertain dependencies according to LLGPar may cause the resulting data to be biased to LLGPar. Therefore, we consider the divergence among all parsers for selection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors would like to thank the anonymous reviewers for their helpful comments. We are deeply grateful to Jiayuan Chao for her earlier-stage experiments on this work, and to Wenliang Chen for the helpful discussions. This work was supported by the National Natural Science Foundation of China (Grant Nos. 61525205, 61373095, and 61502325).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Globally normalized transition-based neural networks", "authors": [ { "first": "Daniel", "middle": [], "last": "Andor", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Alberti", "suffix": "" }, { "first": "David", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "Aliaksei", "middle": [], "last": "Severyn", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Presta", "suffix": "" }, { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2016, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "2442--2452", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Proceedings of ACL, pages 2442-2452.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Top accuracy and fast dependency parsing is not a contradiction", "authors": [ { "first": "Bernd", "middle": [], "last": "Bohnet", "suffix": "" } ], "year": 2010, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "89--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bernd Bohnet. 2010. Top accuracy and fast dependency parsing is not a contradiction. In Proceedings of COLING, pages 89-97.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Experiments with a higher-order projective dependency parser", "authors": [ { "first": "Xavier", "middle": [], "last": "Carreras", "suffix": "" } ], "year": 2007, "venue": "Proceedings of EMNLP/CoNLL", "volume": "", "issue": "", "pages": "141--150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xavier Carreras. 2007. Experiments with a higher-order projective dependency parser.
In Proceedings of EMNLP/CoNLL, pages 141-150.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A fast and accurate dependency parser using neural networks", "authors": [ { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "740--750", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Conference on Empirical Methods in Natural Language Processing, pages 740-750.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Partial training for a lexicalized-grammar parser", "authors": [ { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "James", "middle": [], "last": "Curran", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Human Language Technology Conference of the NAACL", "volume": "", "issue": "", "pages": "144--151", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Clark and James Curran. 2006. Partial training for a lexicalized-grammar parser. In Proceedings of the Human Language Technology Conference of the NAACL, pages 144-151.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2002, "venue": "Proceedings of EMNLP 2002", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins. 2002. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of EMNLP 2002, pages 1-8.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Deep biaffine attention for neural dependency parsing", "authors": [ { "first": "Timothy", "middle": [], "last": "Dozat", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In ICLR.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Sequence learning from data with multiple labels", "authors": [ { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" }, { "first": "Partha", "middle": [ "Pratim" ], "last": "Talukdar", "suffix": "" }, { "first": "Koby", "middle": [], "last": "Crammer", "suffix": "" } ], "year": 2009, "venue": "ECML/PKDD Workshop on Learning from Multi-Label Data", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Dredze, Partha Pratim Talukdar, and Koby Crammer. 2009. Sequence learning from data with multiple labels.
In ECML/PKDD Workshop on Learning from Multi-Label Data.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Efficient, feature-based, conditional random field parsing", "authors": [ { "first": "Jenny", "middle": [ "Rose" ], "last": "Finkel", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Kleeman", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "959--967", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jenny Rose Finkel, Alex Kleeman, and Christopher D. Manning. 2008. Efficient, feature-based, conditional random field parsing. In Proceedings of ACL, pages 959-967.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Combining active learning and partial annotation for domain adaptation of a Japanese dependency parser", "authors": [ { "first": "Daniel", "middle": [], "last": "Flannery", "suffix": "" }, { "first": "Shinsuke", "middle": [], "last": "Mori", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 14th International Conference on Parsing Technologies", "volume": "", "issue": "", "pages": "11--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Flannery and Shinsuke Mori. 2015. Combining active learning and partial annotation for domain adaptation of a Japanese dependency parser. In Proceedings of the 14th International Conference on Parsing Technologies, pages 11-19.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Dependency grammar induction via bitext projection constraints", "authors": [ { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Gillenwater", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" } ], "year": 2009, "venue": "Proceedings of ACL-IJCNLP 2009", "volume": "", "issue": "", "pages": "369--377", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kuzman Ganchev, Jennifer Gillenwater, and Ben Taskar. 2009. Dependency grammar induction via bitext projection constraints. In Proceedings of ACL-IJCNLP 2009, pages 369-377.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Supervised grammar induction using training data with limited constituent information", "authors": [ { "first": "Rebecca", "middle": [], "last": "Hwa", "suffix": "" } ], "year": 1999, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "73--79", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rebecca Hwa. 1999. Supervised grammar induction using training data with limited constituent information. In Proceedings of ACL, pages 73-79.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Dependency parsing and projection based on word-pair classification", "authors": [ { "first": "Wenbin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2010, "venue": "ACL", "volume": "", "issue": "", "pages": "897--904", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenbin Jiang and Qun Liu. 2010. Dependency parsing and projection based on word-pair classification.
In ACL, pages 897-904.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Discriminative learning with natural annotations: Word segmentation as a case study", "authors": [ { "first": "Wenbin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Meng", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Yajuan", "middle": [], "last": "L\u00fc", "suffix": "" }, { "first": "Yating", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2013, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "761--769", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenbin Jiang, Meng Sun, Yajuan L\u00fc, Yating Yang, and Qun Liu. 2013. Discriminative learning with natural annotations: Word segmentation as a case study. In Proceedings of ACL, pages 761-769.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Efficient third-order dependency parsers", "authors": [ { "first": "Terry", "middle": [], "last": "Koo", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2010, "venue": "ACL", "volume": "", "issue": "", "pages": "1--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Terry Koo and Michael Collins. 2010. Efficient third-order dependency parsers. In ACL, pages 1-11.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Active learning for Chinese word segmentation", "authors": [ { "first": "Shoushan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Chu-Ren", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2012, "venue": "Proceedings of COLING 2012: Posters", "volume": "", "issue": "", "pages": "683--692", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shoushan Li, Guodong Zhou, and Chu-Ren Huang. 2012. Active learning for Chinese word segmentation. In Proceedings of COLING 2012: Posters, pages 683-692.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Soft cross-lingual syntax projection for dependency parsing", "authors": [ { "first": "Zhenghua", "middle": [], "last": "Li", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Wenliang", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2014, "venue": "COLING", "volume": "", "issue": "", "pages": "783--793", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhenghua Li, Min Zhang, and Wenliang Chen. 2014. Soft cross-lingual syntax projection for dependency parsing. In COLING, pages 783-793.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Active learning for dependency parsing with partial annotation", "authors": [ { "first": "Zhenghua", "middle": [], "last": "Li", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhanyi", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Wenliang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2016, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhenghua Li, Min Zhang, Yue Zhang, Zhanyi Liu, Wenliang Chen, Hua Wu, and Haifeng Wang. 2016. Active learning for dependency parsing with partial annotation.
In ACL.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Domain adaptation for CRF-based Chinese word segmentation using free annotations", "authors": [ { "first": "Yijia", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Fan", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "864--874", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yijia Liu, Yue Zhang, Wanxiang Che, Ting Liu, and Fan Wu. 2014. Domain adaptation for CRF-based Chinese word segmentation using free annotations. In Proceedings of EMNLP, pages 864-874.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Neural probabilistic model for non-projective MST parsing", "authors": [ { "first": "Xuezhe", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuezhe Ma and Eduard Hovy. 2017. Neural probabilistic model for non-projective MST parsing. arXiv preprint, https://arxiv.org/abs/1701.00874.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Active learning for dependency parsing by a committee of parsers", "authors": [ { "first": "Saeed", "middle": [], "last": "Majidi", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Crane", "suffix": "" } ], "year": 2013, "venue": "Proceedings of IWPT", "volume": "", "issue": "", "pages": "98--105", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saeed Majidi and Gregory Crane. 2013. Active learning for dependency parsing by a committee of parsers. In Proceedings of IWPT, pages 98-105.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "An experimental comparison of active learning strategies for partially labeled sequences", "authors": [ { "first": "Diego", "middle": [], "last": "Marcheggiani", "suffix": "" }, { "first": "Thierry", "middle": [], "last": "Arti\u00e8res", "suffix": "" } ], "year": 2014, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "898--906", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diego Marcheggiani and Thierry Arti\u00e8res. 2014. An experimental comparison of active learning strategies for partially labeled sequences. In Proceedings of EMNLP, pages 898-906.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Turning on the turbo: Fast third-order non-projective turbo parsers", "authors": [ { "first": "Andre", "middle": [], "last": "Martins", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Almeida", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2013, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "617--622", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andre Martins, Miguel Almeida, and Noah A. Smith. 2013. Turning on the turbo: Fast third-order non-projective turbo parsers.
In Proceedings of ACL, pages 617-622.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Online large-margin training of dependency parsers", "authors": [ { "first": "Ryan", "middle": [], "last": "McDonald", "suffix": "" }, { "first": "Koby", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "91--98", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of ACL, pages 91-98.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Online learning of approximate dependency parsing algorithms", "authors": [ { "first": "Ryan", "middle": [], "last": "McDonald", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2006, "venue": "Proceedings of EACL", "volume": "", "issue": "", "pages": "81--88", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceedings of EACL, pages 81-88.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Parse imputation for dependency annotations", "authors": [ { "first": "Jason", "middle": [], "last": "Mielens", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ACL-IJCNLP", "volume": "", "issue": "", "pages": "1385--1394", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Mielens, Liang Sun, and Jason Baldridge. 2015. Parse imputation for dependency annotations. In Proceedings of ACL-IJCNLP, pages 1385-1394.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Active learning for dependency parsing using partially annotated sentences", "authors": [ { "first": "Seyed", "middle": [ "Abolghasem" ], "last": "Mirroshandel", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Nasr", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 12th International Conference on Parsing Technologies", "volume": "", "issue": "", "pages": "140--149", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seyed Abolghasem Mirroshandel and Alexis Nasr. 2011. Active learning for dependency parsing using partially annotated sentences. In Proceedings of the 12th International Conference on Parsing Technologies, pages 140-149.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "An efficient algorithm for projective dependency parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2003, "venue": "Proceedings of IWPT", "volume": "", "issue": "", "pages": "149--160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre. 2003. An efficient algorithm for projective dependency parsing.
In Proceedings of IWPT, pages 149-160.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Constrained arc-eager dependency parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "McDonald", "suffix": "" } ], "year": 2014, "venue": "Computational Linguistics", "volume": "40", "issue": "", "pages": "249--258", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre, Yoav Goldberg, and Ryan McDonald. 2014. Constrained arc-eager dependency parsing. Computational Linguistics, 40:249-258.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Inside-outside reestimation from partially bracketed corpora", "authors": [ { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Schabes", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the Workshop on Speech and Natural Language (HLT)", "volume": "", "issue": "", "pages": "122--127", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fernando Pereira and Yves Schabes. 1992. Inside-outside reestimation from partially bracketed corpora. In Proceedings of the Workshop on Speech and Natural Language (HLT), pages 122-127.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Improved inference for unlexicalized parsing", "authors": [ { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2007, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Proceedings of NAACL.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Parsing the Wall Street Journal using a lexical-functional grammar and discriminative estimation techniques", "authors": [ { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" }, { "first": "Tracy", "middle": [ "H" ], "last": "King", "suffix": "" }, { "first": "Ronald", "middle": [ "M" ], "last": "Kaplan", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Crouch", "suffix": "" }, { "first": "John", "middle": [ "T" ], "last": "Maxwell", "suffix": "III" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2002, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "271--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefan Riezler, Tracy H. King, Ronald M. Kaplan, Richard Crouch, John T. Maxwell III, and Mark Johnson. 2002. Parsing the Wall Street Journal using a lexical-functional grammar and discriminative estimation techniques. In Proceedings of ACL, pages 271-278.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Using smaller constituents rather than sentences in active learning for Japanese dependency parsing", "authors": [ { "first": "Manabu", "middle": [], "last": "Sassano", "suffix": "" }, { "first": "Sadao", "middle": [], "last": "Kurohashi", "suffix": "" } ], "year": 2010, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "356--365", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manabu Sassano and Sadao Kurohashi. 2010. Using smaller constituents rather than sentences in active learning for Japanese dependency parsing.
In Proceedings of ACL, pages 356-365.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Data-driven dependency parsing of new languages using incomplete and noisy training data", "authors": [ { "first": "Kathrin", "middle": [], "last": "Spreyer", "suffix": "" }, { "first": "Jonas", "middle": [], "last": "Kuhn", "suffix": "" } ], "year": 2009, "venue": "CoNLL", "volume": "", "issue": "", "pages": "12--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kathrin Spreyer and Jonas Kuhn. 2009. Data-driven dependency parsing of new languages using incomplete and noisy training data. In CoNLL, pages 12-20.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Target language adaptation of discriminative transfer parsers", "authors": [ { "first": "Oscar", "middle": [], "last": "T\u00e4ckstr\u00f6m", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "McDonald", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2013, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "1061--1071", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oscar T\u00e4ckstr\u00f6m, Ryan McDonald, and Joakim Nivre. 2013. Target language adaptation of discriminative transfer parsers. In Proceedings of NAACL, pages 1061-1071.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Statistical dependency analysis with support vector machines", "authors": [ { "first": "Hiroyasu", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2003, "venue": "Proceedings of IWPT", "volume": "", "issue": "", "pages": "195--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proceedings of IWPT, pages 195-206.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Semi-supervised Chinese word segmentation using partial-label learning with conditional random fields", "authors": [ { "first": "Fan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Vozila", "suffix": "" } ], "year": 2014, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "90--98", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fan Yang and Paul Vozila. 2014. Semi-supervised Chinese word segmentation using partial-label learning with conditional random fields. In Proceedings of EMNLP, pages 90-98.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Syntactic processing using the generalized perceptron and beam search", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2011, "venue": "Computational Linguistics", "volume": "37", "issue": "1", "pages": "105--151", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yue Zhang and Stephen Clark. 2011. Syntactic processing using the generalized perceptron and beam search.
Computational Linguistics, 37(1):105-151.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Transition-based dependency parsing with rich non-local features", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2011, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "188--193", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of ACL, pages 188-193.", "links": null } }, "ref_entries": { "TABREF1": { "text": "Data Statistics. FA is always used for train-1K, whereas PA is simulated for train-39K.", "html": null, "content": "", "type_str": "table", "num": null }, "TABREF3": { "text": "UAS of different parsers trained on all training data (40K).", "html": null, "content": "
Parser | FA (random) 100% | FA (random) 30% | FA (random) 15% | PA (random) 30% | PA (random) 15% | PA (uncertain) 30% | PA (uncertain) 15% | PA (divergence) 21.33%
Biaffine | 94.37 | 93.06 (-1.31) | 92.10 (-2.27) | 92.84 (-1.53) | 91.92 (-2.45) | 93.63 (-0.74) | 92.83 (-1.54) | 93.58 (-0.79)
LLGPar | 93.16 | 91.93 (-1.23) | 91.15 (-2.01) | 92.39 (-0.77) | 91.66 (-1.50) | 93.02 (-0.14) | 92.44 (-0.72) | 92.83 (-0.33)
LGPar | 93.00 | 91.76 (-1.24) | 90.80 (-2.20) | 91.63 (-1.37) | 90.62 (-2.38) | 92.46 (-0.54) | 91.64 (-1.36) | 92.42 (-0.58)
GN3Par | 93.32 | 91.99 (-1.33) | 91.17 (-2.15) | 91.43 (-1.89) | 90.34 (-2.98) | 92.40 (-0.92) | 91.80 (-1.52) | 92.60 (-0.72)
LTPar | 92.77 | 91.22 (-1.55) | 90.35 (-2.42) | 91.12 (-1.65) | 90.12 (-2.65) | 91.35 (-1.42) | 90.99 (-1.78) | 91.04 (-1.73)
", "type_str": "table", "num": null }, "TABREF4": { "text": "", "html": null, "content": "
Parser for completion | No constraints 0% | PA (random) 30% | PA (random) 15% | PA (uncertain) 30% | PA (uncertain) 15% | PA (divergence) 21.33%
Biaffine-1K | 87.08 | 92.10 (+5.02) | 89.79 (+2.71) | 96.78 (+9.70) | 93.47 (+6.39) | 96.76 (+9.68)
LLGPar-1K | 86.67 | 92.65 (+5.98) | 90.02 (+3.35) | 97.43 (+10.76) | 94.43 (+7.76) | 97.07 (+10.40)
LGPar-1K | 86.05 | 92.16 (+6.11) | 89.48 (+3.43) | 97.30 (+11.25) | 94.11 (+8.06) | 96.99 (+10.94)
GN3Par-1K | 85.86 | 92.34 (+6.48) | 89.54 (+3.68) | 97.02 (+11.16) | 93.69 (+7.83) | 96.56 (+10.70)
LTPar-1K | 85.38 | 91.76 (+6.38) | 88.89 (+3.51) | 96.90 (+11.52) | 93.35 (+7.97) | 96.72 (+11.34)
LLGPar-1K+39K | - | 95.55 | 93.37 | 98.30 | 96.22 | 97.69
Biaffine-1K+39K | - | 95.77 | 93.52 | 98.27 | 96.17 | 97.73
", "type_str": "table", "num": null }, "TABREF5": { "text": "UAS of full trees in train-39K completed via constrained decoding.", "html": null, "content": "", "type_str": "table", "num": null } } } }