{ "paper_id": "I11-1026", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:31:48.775130Z" }, "title": "Improving Dependency Parsing with Fined-Grained Features", "authors": [ { "first": "Guangyou", "middle": [], "last": "Zhou", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "Chinese Academy of Sciences", "location": { "addrLine": "95 Zhongguancun East Road", "postCode": "100190", "settlement": "Beijing", "country": "China" } }, "email": "gyzhou@nlpr.ia.ac.cn" }, { "first": "Li", "middle": [], "last": "Cai", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "Chinese Academy of Sciences", "location": { "addrLine": "95 Zhongguancun East Road", "postCode": "100190", "settlement": "Beijing", "country": "China" } }, "email": "lcai@nlpr.ia.ac.cn" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "Chinese Academy of Sciences", "location": { "addrLine": "95 Zhongguancun East Road", "postCode": "100190", "settlement": "Beijing", "country": "China" } }, "email": "kliu@nlpr.ia.ac.cn" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "Chinese Academy of Sciences", "location": { "addrLine": "95 Zhongguancun East Road", "postCode": "100190", "settlement": "Beijing", "country": "China" } }, "email": "jzhao@nlpr.ia.ac.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we present a simple and effective fine-grained feature generation scheme for dependency parsing. We focus on the problem of grammar representation, introducing fine-grained features by splitting various POS tags to different degrees using HowNet hierarchical semantic knowledge. 
To prevent oversplitting, we adopt a threshold-constrained bottom-up strategy to merge the derived subcategories. We conduct experiments on the Penn Chinese Treebank. The results show that, with the fine-grained features, we can improve dependency parsing accuracy by 0.52% (absolute) for the unlabeled first-order parser and by 0.61% (absolute) for the second-order parser.", "pdf_parse": { "paper_id": "I11-1026", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we present a simple and effective fine-grained feature generation scheme for dependency parsing. We focus on the problem of grammar representation, introducing fine-grained features by splitting various POS tags to different degrees using HowNet hierarchical semantic knowledge. To prevent oversplitting, we adopt a threshold-constrained bottom-up strategy to merge the derived subcategories. We conduct experiments on the Penn Chinese Treebank. The results show that, with the fine-grained features, we can improve dependency parsing accuracy by 0.52% (absolute) for the unlabeled first-order parser and by 0.61% (absolute) for the second-order parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In natural language parsing, part-of-speech (POS) information is seen as crucial to resolving ambiguous relationships, yet POS tags are usually too general to encapsulate a word's syntactic behavior. 
It is therefore attractive to consider intermediate entities that exist at a level finer than the POS tags, at which the relationship between specific words and their syntactic contexts may be best modeled.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we introduce fine-grained features by splitting various POS tags to different degrees. First, we split the POS tags of each word in the Treebank using HowNet hypernym-hyponymy hierarchical semantic knowledge (Dong and Dong, 2000) .", "cite_spans": [ { "start": 227, "end": 248, "text": "(Dong and Dong, 2000)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Then we adopt a threshold-constrained bottom-up strategy to merge semantically related subcategories that would otherwise be oversplit. Finally, we use Figure 1 : An example of a labeled dependency tree. The tree contains a special token \"$\" which is always the root of the tree. Each arc is directed from head to modifier and has a label describing the function of the attachment.", "cite_spans": [], "ref_spans": [ { "start": 165, "end": 173, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "the generated sub-categories to construct a new fine-grained feature mapping for a discriminative learner. We are thus relying on the ability of discriminative learning methods to identify and exploit informative features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To demonstrate the effectiveness of our approach, we conduct dependency parsing experiments on the Penn Chinese Treebank (CTB) (Xue et al., 2005) . 
The results show that, with the fine-grained features, we can obtain mildly significant improvements for both first-order and second-order parsing (e.g., the absolute improvements are 0.52% and 0.61%, respectively) (see Section 6).", "cite_spans": [ { "start": 131, "end": 149, "text": "(Xue et al., 2005)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of this paper is organized as follows. Section 2 introduces the motivation. Section 3 gives background on dependency parsing and HowNet hierarchical semantic knowledge. Section 4 describes the fine-grained feature generation scheme. Section 5 presents the fine-grained features. Experimental evaluation and results are reported in Section 6. Section 7 discusses related work. Finally, in Section 8 we draw conclusions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In dependency parsing, we attempt to build head-modifier (or head-dependent) relations between words in a sentence. A simple example is shown in Figure 1 , where NN, VV, and JJ are POS tags.", "cite_spans": [], "ref_spans": [ { "start": 144, "end": 152, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Motivation", "sec_num": "2" }, { "text": "Currently, a variety of statistical methods have been developed for dependency parsing, such as graph-based (McDonald et al., 2005; McDonald and Pereira, 2006) , transition-based (Yamada and Matsumoto, 2003; Hall et al., 2006) , or hybrid methods (Nivre and McDonald, 2008; Martins et al., 2008; Zhang and Clark, 2008) . These methods mainly rely on POS information for important features, but the POS tags are usually too general to encapsulate a word's syntactic behavior, especially for Chinese dependency parsing on CTB (e.g., all the words with the POS tag NN are assumed to share the same syntactic behavior). 
In the limit, each word may well have its own unique syntactic behavior (Petrov and Klein, 2006) . However, in practice, given limited data, the relationships between specific words and their context dependencies may be best modeled at a level finer than the POS tags but coarser than the words themselves. Take the sentence in Figure 1 for example: although the words \u5916\u8d44(foreign capital) and \u589e\u957f\u70b9(growth) have the same POS tag NN, they should have different context dependencies in the dependency parse tree. In HowNet, the two words are defined with different hypernyms. The word \u5916\u8d44(foreign capital) is defined as a kind of objective thing, while the word \u589e\u957f\u70b9(growth) is defined as an event role feature. Intuitively, these different senses can reflect different syntactic behavior, so we attempt to split the POS tags to different degrees based on hierarchical semantic knowledge. Figure 2 shows the number of the most frequent errors, by POS type, on the development set for first-order parsing. From the figure, it is seen that the main errors involve the nominal and verbal categories. This suggests that complex and frequent categories like NN and VV should be split heavily, while rare or simple ones should barely be split. 
Our experiments demonstrate that this strategy can be quite effective in the Chinese dependency parsing task (see Table 2 in Section 4 for empirical results).", "cite_spans": [ { "start": 108, "end": 131, "text": "(McDonald et al., 2005;", "ref_id": "BIBREF18" }, { "start": 132, "end": 159, "text": "McDonald and Pereira, 2006)", "ref_id": "BIBREF17" }, { "start": 179, "end": 207, "text": "(Yamada and Matsumoto, 2003;", "ref_id": "BIBREF26" }, { "start": 208, "end": 226, "text": "Hall et al., 2006)", "ref_id": "BIBREF9" }, { "start": 247, "end": 273, "text": "(Nivre and McDonald, 2008;", "ref_id": null }, { "start": 274, "end": 295, "text": "Martins et al., 2008;", "ref_id": "BIBREF16" }, { "start": 296, "end": 318, "text": "Zhang and Clark, 2008)", "ref_id": "BIBREF28" }, { "start": 692, "end": 716, "text": "(Petrov and Klein, 2006)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 952, "end": 960, "text": "Figure 1", "ref_id": null }, { "start": 1513, "end": 1521, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 1989, "end": 1996, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Motivation", "sec_num": "2" }, { "text": "In dependency parsing, we attempt to build head-modifier (or head-dependent) relations between words in a sentence. The discriminative parser we used in this paper is based on the part-factored model and features of the MSTParser (McDonald et al., 2005; McDonald and Pereira, 2006; Carreras, 2007 ). The parsing model can be defined as a conditional distribution p(y|x; w) over projective parse trees y for a particular sentence x, parameterized by a vector w. 
The probability of a parse tree is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Parsing", "sec_num": "3.1" }, { "text": "p(y|x; w) = \\frac{1}{Z(x; w)} \\exp \\Big\\{ \\sum_{\\rho \\in y} w \\cdot \\Phi(x, \\rho) \\Big\\} \\quad (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Parsing", "sec_num": "3.1" }, { "text": "where Z(x; w) is the partition function and \u03a6 are part-factored feature functions that include head-modifier parts, sibling parts and grandchild parts. Given the training set", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Parsing", "sec_num": "3.1" }, { "text": "\\{(x_i, y_i)\\}_{i=1}^{N}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Parsing", "sec_num": "3.1" }, { "text": ", parameter estimation for log-linear models generally revolves around optimization of a regularized conditional log-likelihood objective", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Parsing", "sec_num": "3.1" }, { "text": "w^* = \\arg\\min_w L(w), \\text{ where } L(w) = -C \\sum_{i=1}^{N} \\log p(y_i|x_i; w) + \\frac{1}{2} ||w||^2 \\quad (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Parsing", "sec_num": "3.1" }, { "text": "The parameter C > 0 is a constant dictating the level of regularization in the model. The objective function L(w) is smooth and convex, which is convenient for standard gradient-based optimization techniques. 
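The part-factored scoring of Eq. (1) can be illustrated with a toy enumeration. Everything below (feature template, weights, candidate trees) is hypothetical, and a real parser computes the partition function Z(x; w) with dynamic programming rather than by listing trees; this is only a sketch of the probability model:

```python
import math

# Toy sketch of Eq. (1): p(y|x; w) = (1/Z(x; w)) exp( sum_{rho in y} w . Phi(x, rho) ).
# Feature map, weights, and candidate trees are hypothetical illustrations.

def part_features(rho):
    # Phi(x, rho): one indicator feature per (head, modifier) part.
    head, mod = rho
    return {f"head={head}|mod={mod}": 1.0}

def tree_score(tree, w):
    # w . Phi summed over the parts rho of one candidate tree.
    return sum(w.get(name, 0.0) * val
               for rho in tree
               for name, val in part_features(rho).items())

def tree_probability(tree, candidates, w):
    # Exponentiated score normalized by the partition function Z(x; w),
    # here computed by brute-force enumeration of the candidates.
    z = sum(math.exp(tree_score(t, w)) for t in candidates)
    return math.exp(tree_score(tree, w)) / z

# Two candidate unlabeled trees over a toy 2-word sentence rooted at token 0.
w = {"head=0|mod=1": 1.5, "head=1|mod=2": 0.7, "head=0|mod=2": 0.2}
candidates = [((0, 1), (1, 2)), ((0, 1), (0, 2))]
p = tree_probability(candidates[0], candidates, w)
```

Since the probabilities are normalized over the candidate set, they sum to one, and the tree whose parts carry higher feature weights receives the larger share.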
In this paper we use dual exponentiated gradient (EG) 1 descent, which is a particularly effective optimization algorithm for log-linear models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Parsing", "sec_num": "3.1" }, { "text": "HowNet is a bilingual general knowledge base describing relations between concepts and relations between the attributes of concepts in Chinese and their English equivalents (Gan and Wong, 2000) . HowNet constructs a hierarchical structure of its knowledge base from hypernym-hyponymy relations. The basic unit of meaning is the sememe, which cannot be further decomposed and is represented in Chinese with its English equivalent, such as the sememe fund|\u8d44\u91d1. The explicated relations of HowNet include hypernym-hyponymy, synonymy, metonymy, antonymy, part-whole, attribute-host, material-product, dynamic role and concept co-occurrence, and so on. In this paper, we only consider the hypernym-hyponymy relations at different levels of granularity. Since a word may have different senses, and therefore different definitions in HowNet, we just use the first definition as the semantic-related tag of the word. Take the concept \u5916\u8d44(foreign capital) for example; its definition and hypernym-hyponymy relations are listed below from the most specific to the most general, which we call the hierarchical semantic information in this paper. 
Definition: DEF = {fund|\u8d44\u91d1:modifier= {foreign|\u5916\u56fd}} Hierarchy:", "cite_spans": [ { "start": 173, "end": 193, "text": "(Gan and Wong, 2000)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "HowNet Semantic Knowledge", "sec_num": "3.2" }, { "text": "fund|\u8d44\u91d1\u2192wealth|\u94b1\u8d22\u2192artifact|\u4eba\u5de5\u7269\u2192inanimate|\u65e0\u751f\u7269\u2192physical|\u7269\u8d28\u2192thing|\u4e07\u7269\u2192entity|\u5b9e\u4f53", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HowNet Semantic Knowledge", "sec_num": "3.2" }, { "text": "In the definition, HowNet decomposes the concept into the sememes 'fund|\u8d44\u91d1', 'wealth|\u94b1\u8d22', 'artifact|\u4eba\u5de5\u7269', 'inanimate|\u65e0\u751f\u7269', 'physical|\u7269\u8d28', 'thing|\u4e07\u7269', 'entity|\u5b9e\u4f53'. The sememe appearing in the first position of the Definition ('fund|\u8d44\u91d1') is the categorical attribute, which names the hypernym of the concept \u5916\u8d44(foreign capital). Those sememes appearing in other positions (e.g., 'foreign|\u5916\u56fd') are additional attributes, which give more specific information about the concept.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HowNet Semantic Knowledge", "sec_num": "3.2" }, { "text": "It is clear that the word \u5916\u8d44(foreign capital) has hypernyms ranging from the most specific hypernym fund|\u8d44\u91d1 to the most general hypernym entity|\u5b9e\u4f53 in a hierarchical way.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HowNet Semantic Knowledge", "sec_num": "3.2" }, { "text": "HowNet covers only a limited vocabulary, so there are many words which cannot be found in it. 
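The hierarchical semantic information above can be thought of as an ordered hypernym chain per word, from most specific to most general, from which a semantic-related tag of any granularity can be read off. The dictionary and helper below are an illustrative sketch (not HowNet's actual data format or API); the fallback to the bare POS tag mirrors the out-of-vocabulary case that the CiLin extension is meant to reduce:

```python
# Hypernym chain for 外资 (foreign capital), per the Definition above,
# ordered from most specific to most general.  Hypothetical data layout.
HYPERNYM_CHAINS = {
    "外资": ["fund", "wealth", "artifact", "inanimate",
             "physical", "thing", "entity"],
}

def semantic_tag(word, pos, level, chains=HYPERNYM_CHAINS):
    """Return a subcategory tag such as 'NN-wealth' at the requested depth.

    Words absent from the knowledge base fall back to the bare POS tag."""
    chain = chains.get(word)
    if chain is None:
        return pos
    # Clamp the requested level to the length of this word's chain.
    return f"{pos}-{chain[min(level, len(chain) - 1)]}"
```

Reading the chain at depth 0 gives the most specific tag (NN-fund for 外资), while larger depths give progressively coarser tags up to NN-entity.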
In this paper, we extend HowNet with the Chinese knowledge base \"TongYiCiLin\" (abbreviation: CiLin) (Mei et al., 1983) , which represents 77,343 words in a dendrogram (or tree).", "cite_spans": [ { "start": 189, "end": 207, "text": "(Mei et al., 1983)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "HowNet Semantic Knowledge", "sec_num": "3.2" }, { "text": "CiLin is organized as a hierarchical tree structure in which each node represents a semantic category. To balance word coverage, we extract semantic categories at level 3, which covers 1,400 subcategories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HowNet Semantic Knowledge", "sec_num": "3.2" }, { "text": "HowNet and CiLin have different ontologies and representations of semantic categories (Xiong et al., 2005) , so we combine the two dictionaries: given a word w, if we cannot find it in HowNet but can find it in CiLin, we try to replace w with a synonym s from the synset defined by CiLin. If the synonym s can be found in HowNet, the corresponding semantic-related tag in HowNet will be assigned to w.", "cite_spans": [ { "start": 86, "end": 106, "text": "(Xiong et al., 2005)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "HowNet Semantic Knowledge", "sec_num": "3.2" }, { "text": "4 Fine-Grained Feature Generation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "HowNet Semantic Knowledge", "sec_num": "3.2" }, { "text": "In this subsection, we split the original POS tags to different degrees based on HowNet hierarchical semantic knowledge. The challenge is how to deal with polysemous words, since each word may have multiple senses and therefore different definitions in HowNet. Following Xiong et al. (2005) and Lin et al. 
(2009) , we just use the first sense to determine the sense of each token instance of a target word (e.g., all token instances of a given word are tagged with the sense that occurs most frequently in HowNet).", "cite_spans": [ { "start": 287, "end": 306, "text": "Xiong et al. (2005)", "ref_id": "BIBREF24" }, { "start": 311, "end": 328, "text": "Lin et al. (2009)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Splitting the POS Tags", "sec_num": "4.1" }, { "text": "As mentioned in Section 3.2, the semantic information of each word can be represented as hierarchical hypernym-hyponymy relations. In this paper, we attempt to establish the mapping from the top down and split the words into different subcategories based on the hypernym-hyponymy relations defined in HowNet. For easy explanation of the splitting process, we take the words with POS tag NN for example; the fine-grained feature generation is shown in Figure 3 . The left part of the figure shows the word subcategories, which are split based on the HowNet hierarchy. As shown by the dashed line from left to right, we generate each subcategory with the hierarchical semantic-related tag, such as NN-event, NN-entity, NN-thing, NN-time and so on. If the hypernym node has no hyponym, the corresponding subcategory will stop splitting (e.g., at level 3 in Figure 3, \"fruit\" is the most specific hypernym of the corresponding words \"banana\" and \"apple\" in the HowNet hierarchy, and cannot be further decomposed). 
The details of the HowNet hierarchy were presented in Dong and Dong (2000) .", "cite_spans": [ { "start": 1051, "end": 1071, "text": "Dong and Dong (2000)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 446, "end": 454, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Splitting the POS Tags", "sec_num": "4.1" }, { "text": "As shown in Figure 3 , the original relationships between the words and their syntactic contexts are modeled by the POS tag NN; after the hierarchical split, the relationships can be best modeled at the different levels of the fine-grained subcategories. In this view, the fine-grained feature generation is just a hierarchical clustering of the words themselves with the fine-grained semantic-related tags. Unlike previous work, such as word clustering techniques and data-driven splitting (Matsuzaki et al., 2005; Petrov and Klein, 2006) , our approach does not exploit unlabeled data, and the splitting is based on hierarchical semantic knowledge instead of maximizing posterior probability, which is much simpler than their methods.", "cite_spans": [ { "start": 493, "end": 517, "text": "(Matsuzaki et al., 2005;", "ref_id": "BIBREF15" }, { "start": 518, "end": 541, "text": "Petrov and Klein, 2006)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Splitting the POS Tags", "sec_num": "4.1" }, { "text": "Intuitively, creating more subcategories can increase parsing accuracy. On the other hand, oversplitting can be a serious problem, as detailed in Klein and Manning (2003) . To prevent oversplitting, we merge the subcategories based on the threshold constraint. 
After the splitting, each subcategory contains a group of words which share the same semantic-related tag.", "cite_spans": [ { "start": 161, "end": 185, "text": "Klein and Manning (2003)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Merging Based on Threshold Constraint", "sec_num": "4.2" }, { "text": "Then we measure the size of each subcategory to determine whether the subcategory should be further merged. For easy explanation, we show an example in Figure 4 , where each node C_i denotes a subcategory, C_j is the nearest hypernym subcategory of C_i, C_k is the nearest hypernym subcategory of C_j, and so on. Assume that f(C_i), f(C_j) and f(C_k) denote the numbers of words contained in the subcategories C_i, C_j and C_k respectively, and that f is the threshold. We judge that C_i should be merged into C_j if f(C_i) < f, and update the number of words contained in C_j using the following formula:", "cite_spans": [], "ref_spans": [ { "start": 152, "end": 160, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Merging Based on Threshold Constraint", "sec_num": "4.2" }, { "text": "f_{update}(C_j) = \\begin{cases} f(C_i) + f(C_j) & \\text{if } f(C_i) < f \\\\ f(C_j) & \\text{otherwise} \\end{cases} \\quad (3) where f_{update}(C_j)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Merging Based on Threshold Constraint", "sec_num": "4.2" }, { "text": "is the number of words contained in the updated subcategory C_j. In this way, we repeatedly merge the subcategories bottom-up through the hypernym ladder according to formula (3). Finally, we generate fine-grained subcategories of appropriate granularity by this split-merge approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Merging Based on Threshold Constraint", "sec_num": "4.2" }, { "text": "In our approach, each POS tag is divided into several subcategories. The subcategories of some POS tags, with example words, are shown in Table 1 . 
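The bottom-up threshold merging of formula (3) can be sketched as follows; the category names, counts, threshold, and hypernym map below are hypothetical illustrations rather than the paper's actual subcategories:

```python
# Sketch of formula (3): when a subcategory C_i holds fewer than f words,
# its count is folded into its nearest hypernym subcategory C_j, and the
# merge proceeds bottom-up through the hypernym ladder.

def ladder_depth(cat, parent):
    # Number of hypernym steps from this subcategory to the top.
    d = 0
    while parent.get(cat) is not None:
        cat, d = parent[cat], d + 1
    return d

def merge_subcategories(counts, parent, f):
    """counts: {subcategory: word count}; parent: nearest hypernym map.
    Returns the word counts of the surviving subcategories."""
    merged = dict(counts)
    # Visit the deepest subcategories first so counts propagate bottom-up.
    for cat in sorted(merged, key=lambda c: -ladder_depth(c, parent)):
        p = parent.get(cat)
        if p is not None and merged[cat] < f:
            merged[p] += merged.pop(cat)  # f_update(C_j) = f(C_i) + f(C_j)
    return merged

# Hypothetical ladder: NN-fund -> NN-wealth -> NN-entity.
counts = {"NN-entity": 500, "NN-wealth": 40, "NN-fund": 8}
parent = {"NN-fund": "NN-wealth", "NN-wealth": "NN-entity", "NN-entity": None}
surviving = merge_subcategories(counts, parent, 50)
```

With threshold 50, NN-fund is folded into NN-wealth, and the combined count (still under the threshold) is in turn folded into NN-entity, leaving a single coarse subcategory.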
The categories comprise the original POS tags and the subcategories derived from HowNet. For example, NN is split into NN-InstitutePlace, NN-aValue, and so on. The number of subcategories for each POS tag is shown in Table 2 : The number of subcategories generated by our hierarchical semantic knowledge based split-merge procedure.", "cite_spans": [], "ref_spans": [ { "start": 132, "end": 139, "text": "Table 1", "ref_id": "TABREF3" }, { "start": 358, "end": 365, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Merging Based on Threshold Constraint", "sec_num": "4.2" }, { "text": "For example, the common noun (NN) category is divided into the maximum number of subcategories (24). One subcategory consists primarily of objective things, whose typical semantic category is entity. Another subcategory is defined as an attribute, and so on. These kinds of semantic-related subcategories are typical, and give a division similar to distributional clustering results such as those of Schuetze (1998) . The proper noun (NR) category is split into 5 subcategories, including entity, InstitutePlace, attribute, aValue, and so on, which are defined in HowNet. The temporal noun (NT) category is also split into 3 subcategories.", "cite_spans": [ { "start": 395, "end": 410, "text": "Schuetze (1998)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Merging Based on Threshold Constraint", "sec_num": "4.2" }, { "text": "Verbal categories are also heavily split. Verbal subcategories sometimes reflect syntactic selectional preferences, and sometimes reflect other aspects of verbal syntax (Petrov and Klein, 2006) . For example, the common verb (VV) category is divided into 17 subcategories by the hierarchical split-merge procedure. 
The predicative adjective (VA) category is also split into 4 subcategories.", "cite_spans": [ { "start": 169, "end": 193, "text": "(Petrov and Klein, 2006)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Merging Based on Threshold Constraint", "sec_num": "4.2" }, { "text": "Functional categories generally have fewer splits, as shown in Table 2 . Intuitively, those categories are known to be strongly correlated with syntactic behavior; examples include determiner (DT), interjection (IJ), onomatopoeia (ON), and so on.", "cite_spans": [], "ref_spans": [ { "start": 59, "end": 66, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Merging Based on Threshold Constraint", "sec_num": "4.2" }, { "text": "Key to the success of our approach is the use of HowNet hierarchical semantic knowledge to generate fine-grained features to assist the dependency parsers. The feature sets we used in this paper are similar to other feature sets in the literature (McDonald et al., 2005; McDonald and Pereira, 2006; Carreras, 2007) , so we will not attempt to give an exhaustive description of the features in this section. Rather, we describe our fine-grained features at a high level and concentrate on our motivations. In the experiments, we employed two different feature sets: a baseline feature set which draws upon \"normal\" information sources such as word forms and POS tags, and a fine-grained feature set that also includes information derived from the HowNet hierarchical semantic knowledge.", "cite_spans": [ { "start": 251, "end": 274, "text": "(McDonald et al., 2005;", "ref_id": "BIBREF18" }, { "start": 275, "end": 302, "text": "McDonald and Pereira, 2006;", "ref_id": "BIBREF17" }, { "start": 303, "end": 318, "text": "Carreras, 2007)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Feature Design", "sec_num": "5" }, { "text": "Our first-order baseline feature set is similar to the feature set of McDonald et al. 
(2005) and McDonald and Pereira (2006) . The second-order baseline features are the same as those of Carreras (2007) and include indicators for triples of POS tags for sibling interactions and grandparent interactions, as well as additional bigram features based on pairs of words involved in these higher-order interactions.", "cite_spans": [ { "start": 70, "end": 92, "text": "McDonald et al. (2005)", "ref_id": "BIBREF18" }, { "start": 97, "end": 124, "text": "McDonald and Pereira (2006)", "ref_id": "BIBREF17" }, { "start": 187, "end": 202, "text": "Carreras (2007)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Feature Design", "sec_num": "5" }, { "text": "The first- and second-order fine-grained features are complementary to the baseline features. We generate the fine-grained features by mimicking the word-to-tag and tag-to-tag interactions between the head and modifier of a dependency. We also include indicators for triples of fine-grained subcategory tags for sibling and grandparent interactions. Examples of these features are provided in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 395, "end": 402, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Feature Design", "sec_num": "5" }, { "text": "So far, we have demonstrated our fine-grained feature generation scheme using HowNet hierarchical semantic knowledge. With the derived subcategories, we can construct a new fine-grained feature mapping for a discriminative learner. We are relying on the ability of discriminative learning methods to identify and exploit informative features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Design", "sec_num": "5" }, { "text": "In order to evaluate the effectiveness of the proposed approach, we conducted dependency parsing experiments in Chinese. 
The experiments were performed on the Penn Chinese Treebank (CTB) version 5.0 (Xue et al., 2005) , using a set of head-selection rules (Zhang and Clark, 2008) to convert the phrase structure syntax of the Treebank to a dependency tree representation; dependency labels were obtained via the \"Malt\" hardcoded setting. 2 We split the data into a training set (files 1-270 and files 400-931), a development set (files 301-325) and a test set (files 271-300). The development and test sets use gold-standard segmentation and POS tags from CTB.", "cite_spans": [ { "start": 199, "end": 217, "text": "(Xue et al., 2005)", "ref_id": "BIBREF25" }, { "start": 256, "end": 279, "text": "(Zhang and Clark, 2008)", "ref_id": "BIBREF28" }, { "start": 438, "end": 439, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "We measured parser quality by the unlabeled attachment score (UAS), i.e., the percentage of tokens (excluding all punctuation tokens) with the correct HEAD. We also evaluated complete dependency analysis (CM).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "In this subsection, we conduct experiments using only the splitting operator.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Splitting Experiments", "sec_num": "6.1" }, { "text": "The results are shown in Table 4 , where Ord1/Ord2 refer to first-/second-order parsers (McDonald et al., 2005; McDonald and Pereira, 2006; Carreras, 2007) with baseline features. 
Ord1f/Ord2f refer to first-/second-order parsers with baseline+fine-grained features. Table 4 : Dependency parsing results on the test set using only the splitting operator.", "cite_spans": [ { "start": 91, "end": 115, "text": "(Mc-Donald et al., 2005;", "ref_id": null }, { "start": 116, "end": 143, "text": "McDonald and Pereira, 2006;", "ref_id": "BIBREF17" }, { "start": 144, "end": 159, "text": "Carreras, 2007)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 25, "end": 32, "text": "Table 4", "ref_id": null }, { "start": 285, "end": 292, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Splitting Experiments", "sec_num": "6.1" }, { "text": "The improvements by the fine-grained features over the baseline features are shown in parentheses. There are some clear trends in the results. First, the performance increases with the order of the parser: the first-order model (Ord1) has the lowest performance, and adding sibling and grandparent interactions (Ord2) yields better performance. Similar observations regarding the effect of model order have also been made by Carreras (2007) , among others.", "cite_spans": [ { "start": 414, "end": 429, "text": "Carreras (2007)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Splitting Experiments", "sec_num": "6.1" }, { "text": "Second, note that the parsers using the fine-grained features outperform the baseline, regardless of model order. Moreover, the benefit of the fine-grained features grows with the model order. 
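The relative error reductions quoted in the following sentences relate an absolute UAS gain to the errors remaining at the baseline; a small helper, shown with hypothetical scores rather than the paper's actual numbers:

```python
# A gain of d UAS points over a baseline of u UAS removes
# 100 * d / (100 - u) percent of the remaining attachment errors.
# The inputs below are hypothetical, not the paper's scores.

def relative_error_reduction(baseline_uas, improved_uas):
    """Percentage of remaining attachment errors removed."""
    return 100.0 * (improved_uas - baseline_uas) / (100.0 - baseline_uas)

reduction = relative_error_reduction(84.0, 84.5)  # hypothetical UAS pair
```

Note that the same absolute gain translates into a larger relative reduction the higher the baseline already is, which is why relative figures are useful when comparing parsers of different orders.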
For example, adding fine-grained features to the first-order parser (Ord1 to Ord1f) results in a relative reduction in error of roughly 1.12%, while adding them to the second-order parser (Ord2 to Ord2f) yields a larger relative error reduction of roughly 2.05%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Splitting Experiments", "sec_num": "6.1" }, { "text": "To prevent oversplitting, we merge the subcategories based on the threshold constraint. For the parameter f in equation 3, different POS tags (e.g., NN, VV, JJ, ON, \u2022 \u2022 \u2022 ) need different values. We ran experiments on the development set to determine the best value among 10, 20, 50, 100, 200, 300, \u2022 \u2022 \u2022 , 1,000 in terms of UAS for each POS tag. The resulting numbers of subcategories are shown in Table 2 (in Section 4). The models with the best parameter values are then evaluated on the test set of CTB 5.0. Table 5 shows the results. Performance is further improved by the merging operator, which validates its effectiveness. Overall, for the first-order parser, we find that there is an absolute improvement of 0.52 points (UAS) by adding fine-grained features. For the second-order parser, we get an absolute improvement of 0.61 points (UAS) by including fine-grained features. The improvements of parsing with fine-grained features are mildly significant using the Z-test of Collins et al. (2005) . Wang et al. (2007) 86.6 - Yu et al. (2008) - 87.26 Zhao et al. (2009) 88.9 87.0 Chen et al. 
(2009)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 516, "end": 523, "text": "Table 5", "ref_id": "TABREF8" }, { "start": 1179, "end": 1186, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Merging Experiments", "sec_num": "6.2" }, { "text": "To put our results in perspective, we also compare our second-order system with other best-performing systems: Wang et al. (2007) , Yu et al. (2008) , Zhao et al. (2009) and Chen et al. (2009) , respectively. The results are shown in Table 6 ; our approach outperforms the first three systems. Chen et al. (2009) report a very high performance using subtree features from auto-parsed data. Our system does not use such knowledge. Some researchers conducted experiments on CTB with a different data split: files 1-815 and files 1001-1136 for training, files 816-885 and files 1137-1147 for test, files 886-931 and 1148-1151 for development. Evaluation on the development and test sets was also performed using gold-standard POS tags. We report the experimental results as well as the performance of previous work on this data set in Table 7 . Our results are better than most previous work, although Zhang and Clark (2008) achieved an even higher accuracy (86.21) by combining both graph-based and transition-based parsing into a single system for training and decoding. Moreover, their technique is orthogonal to ours, and we suspect that integrating the fine-grained features into the combined parsers might yield even better performance.", "cite_spans": [ { "start": 100, "end": 118, "text": "Wang et al. (2007)", "ref_id": "BIBREF23" }, { "start": 121, "end": 137, "text": "Yu et al. (2008)", "ref_id": "BIBREF27" }, { "start": 140, "end": 158, "text": "Zhao et al. (2009)", "ref_id": "BIBREF29" }, { "start": 163, "end": 181, "text": "Chen et al. (2009)", "ref_id": "BIBREF3" }, { "start": 283, "end": 301, "text": "Chen et al. 
(2009)", "ref_id": "BIBREF3" }, { "start": 902, "end": 924, "text": "Zhang and Clark (2008)", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 223, "end": 230, "text": "Table 6", "ref_id": null }, { "start": 835, "end": 842, "text": "Table 7", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Comparison with Previous Work", "sec_num": "6.3" }, { "text": "Our purpose in this paper is to incorporate the fine-grained features to assist dependency parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6.4" }, { "text": "Systems (UAS): Duan et al. (2007) 84.38; Zhang and Clark (2008) 86.21; Huang and Sagae (2010) 85.20; Ours 85.45. Figure 5 shows an example of dependency trees produced by the baseline parser and our proposed approach. In Figure 5(a) , the baseline parser incorrectly assigned \u5956/NN (prize) as the modifier of \u4ee5/P (with), and the head of \u4ee5/P was also incorrectly recognized as \u6388\u4e88/VV (award). The reason may be that the POS features (P\u2192NN and VV\u2192P) are too general to model the syntactic dependencies. However, after introducing the fine-grained features P-event\u2192NN-attribute and VV-AlterRelational\u2192P-event, \u540d\u5b57/NN (name) was selected as the modifier of \u4ee5/P (with) and the head of \u4ee5/P (with) was correctly recognized ( Figure 5(b) ).", "cite_spans": [ { "start": 8, "end": 30, "text": "UAS Duan et al. 
(2007)", "ref_id": null }, { "start": 37, "end": 59, "text": "Zhang and Clark (2008)", "ref_id": "BIBREF28" }, { "start": 66, "end": 88, "text": "Huang and Sagae (2010)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 106, "end": 114, "text": "Figure 5", "ref_id": "FIGREF3" }, { "start": 214, "end": 225, "text": "Figure 5(a)", "ref_id": "FIGREF3" }, { "start": 702, "end": 714, "text": "Figure 5(b)", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Discussion", "sec_num": "6.4" }, { "text": "Besides, there exist a large number of neighborhood ambiguities in Chinese dependency parsing, such as \"NN NN NN\", \"JJ NN NN\", \"AD VV VV\", \"JJ NN CC NN\" and so on; their possible parsing trees are shown in Figure 6 . For those ambiguities, our approach can provide the fine-grained features as additional information for the parser. For example, we have the following case in the data set: \"\u5916\u5546NN(foreign tradesman)/\u6295\u8d44NN(investment)/\u4f01\u4e1aNN(enterprise)/\". We can provide additional information about the relations of \"\u5916\u5546NN-human(foreign tradesman)/\u4f01\u4e1aNN-InstitutePlace(enterprise)\" and \"\u5916\u5546NN-human(foreign tradesman)/\u6295\u8d44NN-event(investment)\", which can be used to help the parser make the correct decision. Our approach can also help with longer dependencies, such as \"JJ NN NN NN\" and \"NN NN NN NN\". For the \"JJ NN1 CC NN2\" ambiguity, we can provide additional information about the relations of JJ/NN1 and JJ/NN2. In this case, the dependency parser can correctly resolve the ambiguity.", "cite_spans": [], "ref_spans": [ { "start": 208, "end": 216, "text": "Figure 6", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Discussion", "sec_num": "6.4" }, { "text": "Our proposed approach is only a preliminary work. 
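The fine-grained tags in the examples above pair each coarse POS tag with the word's top-level HowNet category; a minimal sketch, with a toy word-to-category lexicon standing in for the real HowNet resource (the threshold-based merging and the handling of polysemy described elsewhere in the paper are not modeled):

```python
# Toy lexicon mapping words to a top-level HowNet category; an illustrative
# stand-in for the real HowNet hierarchy, not the actual resource.
HOWNET_TOP = {
    "外商": "human",           # foreign tradesman
    "投资": "event",           # investment
    "企业": "InstitutePlace",  # enterprise
}

def fine_grained_tag(word: str, pos: str) -> str:
    """Split a coarse POS tag by the word's top-level HowNet category.

    Words not covered by the lexicon back off to the plain POS tag.
    """
    category = HOWNET_TOP.get(word)
    return f"{pos}-{category}" if category else pos

sentence = [("外商", "NN"), ("投资", "NN"), ("企业", "NN")]
print([fine_grained_tag(w, p) for w, p in sentence])
# ['NN-human', 'NN-event', 'NN-InstitutePlace']
```

With these refined tags, the three relevant word pairs in the "NN NN NN" sequence carry distinguishable feature strings, which is the extra signal the parser uses to choose among the candidate trees.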
Despite this success, several problems remain that warrant further investigation in future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6.4" }, { "text": "(1) In this paper we split the POS tags by using the gold-standard POS tags of CTB. However, in many real applications, the sentences to be parsed often come from plain text, and POS tagging is an unavoidable phase before dependency parsing. Splitting the POS tags, however, makes the tagging phase considerably harder. Whether the gain in parsing performance outweighs the loss in POS tagging accuracy is an appealing and challenging question in practice. We leave it for future research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6.4" }, { "text": "(2) To deal with the problem of polysemous words, we simply use the first definition in HowNet. A natural avenue for further research would be the development of word sense disambiguation (WSD) technology to solve this problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6.4" }, { "text": "In this paper, we have focused on developing new representations for POS information. The idea of exploiting different granularities of information for dependency parsing has previously been investigated. Liu et al. (2007) subdivided verbs according to their grammatical functions and integrated the information of verb subclasses into the dependency parsing model. They regarded the verb subdividing process as a classification task. In contrast, we split the POS tags based on HowNet hierarchical semantic knowledge and extend the subdivision to all types of POS tags, which is much simpler than the classification-based method. introduced lexical intermediaries at a coarser level than words themselves via a cluster method. 
Our approach is similar to theirs in that we use a fine-grained feature generation scheme based on HowNet hierarchical semantic knowledge, and the fine-grained features can be viewed as a kind of \"back-off\" version of the baseline features. However, we focus on the problem of POS representation instead of lexical representation.", "cite_spans": [ { "start": 205, "end": 222, "text": "Liu et al. (2007)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "Recently, several studies have focused on parsing with semantic knowledge. Agirre et al. (2008) used word sense information to improve English parsing and PP attachment. Xiong et al. (2005) and Lin et al. (2009) extracted hypernym features from HowNet semantic knowledge and integrated the features into a generative model for Chinese constituent parsing. As with their work, we also use semantic knowledge for parsing. However, our goal is to employ HowNet hierarchical semantic knowledge to generate fine-grained features for dependency parsing, rather than for PCFGs, requiring a substantially different model formulation. Besides, Bansal and Klein (2011) and Zhou et al. (2011) exploited web-scale semantic information for parsing.", "cite_spans": [ { "start": 84, "end": 104, "text": "Agirre et al. (2008)", "ref_id": "BIBREF0" }, { "start": 179, "end": 198, "text": "Xiong et al. (2005)", "ref_id": "BIBREF24" }, { "start": 203, "end": 220, "text": "Lin et al. (2009)", "ref_id": "BIBREF13" }, { "start": 642, "end": 665, "text": "Bansal and Klein (2011)", "ref_id": "BIBREF1" }, { "start": 670, "end": 688, "text": "Zhou et al. (2011)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "In this paper, we focus on the problem of grammar representation, introducing fine-grained features by splitting various POS tags to different degrees using HowNet hierarchical semantic knowledge. 
To prevent oversplitting, we adopt a threshold-constrained bottom-up strategy to merge the derived subcategories. The results show that the fine-grained features improve dependency parsing accuracy by 0.52% (absolute) for the unlabeled first-order parser and by 0.61% (absolute) for the second-order parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "http://groups.csail.mit.edu/nlp/egstra/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by the National Natural Science Foundation of China (No. 60875041 and No. 61070106). We thank the anonymous reviewers for their insightful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Improving parsing and PP-attachment performance with sense information", "authors": [ { "first": "E", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "T", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "D", "middle": [], "last": "Martinez", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "317--325", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Agirre, T. Baldwin, and D. Martinez. 2008. Improving parsing and PP-attachment performance with sense information. In Proceedings of ACL-08: HLT, pages 317-325.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Web-Scale Features for Full-Scale Parsing", "authors": [ { "first": "M", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "D", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2011, "venue": "Proceedings of ACL-HLT", "volume": "", "issue": "", "pages": "693--702", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. 
Bansal and D. Klein. 2011. Web-Scale Features for Full-Scale Parsing. In Proceedings of ACL-HLT, pages 693-702.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Experiments with a higher-order projective dependency parser", "authors": [ { "first": "X", "middle": [], "last": "Carreras", "suffix": "" } ], "year": 2007, "venue": "Proceedings of EMNLP-CoNLL", "volume": "", "issue": "", "pages": "957--961", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Carreras. 2007. Experiments with a higher-order projective dependency parser. In Proceedings of EMNLP-CoNLL, pages 957-961.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Improving dependency parsing with subtrees from auto-parsed data", "authors": [ { "first": "W", "middle": [], "last": "Chen", "suffix": "" }, { "first": "D", "middle": [], "last": "Kawahara", "suffix": "" }, { "first": "K", "middle": [], "last": "Uchimoto", "suffix": "" }, { "first": "Torisawa", "middle": [], "last": "", "suffix": "" } ], "year": 2009, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "570--579", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. Chen, D. Kawahara, K. Uchimoto, and Torisawa. 2009. Improving dependency parsing with subtrees from auto-parsed data. In Proceedings of EMNLP, pages 570-579.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Exponentiated gradient algorithm for conditional random fields and max-margin markov networks", "authors": [ { "first": "M", "middle": [], "last": "Collins", "suffix": "" }, { "first": "A", "middle": [], "last": "Globerson", "suffix": "" }, { "first": "T", "middle": [], "last": "Koo", "suffix": "" }, { "first": "X", "middle": [], "last": "Carreras", "suffix": "" }, { "first": "P", "middle": [ "L" ], "last": "Bartlett", "suffix": "" } ], "year": 2008, "venue": "Journal of Machine Learning Research", "volume": "", "issue": "", "pages": "1775--1822", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Collins, A. 
Globerson, T. Koo, X. Carreras, and P. L. Bartlett. 2008. Exponentiated gradient algorithm for conditional random fields and max-margin Markov networks. Journal of Machine Learning Research, pages 1775-1822.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Clause restructuring for statistical machine translation", "authors": [ { "first": "M", "middle": [], "last": "Collins", "suffix": "" }, { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "I", "middle": [], "last": "Kucerova", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "531--540", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Collins, P. Koehn, and I. Kucerova. 2005. Clause restructuring for statistical machine translation. In Proceedings of ACL, pages 531-540.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "HowNet Chinese-English conceptual database", "authors": [ { "first": "Z", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Q", "middle": [], "last": "Dong", "suffix": "" } ], "year": 2000, "venue": "Technical report online software database, released at ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Z. Dong and Q. Dong. 2000. HowNet Chinese-English conceptual database. Technical report online software database, released at ACL, http://www.keenage.com.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Probabilistic Models for action-based Chinese dependency parsing", "authors": [ { "first": "X", "middle": [], "last": "Duan", "suffix": "" }, { "first": "J", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "B", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2007, "venue": "Proceedings of ECML/PKDD", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Duan, J. Zhao, and B. Xu. 2007. Probabilistic Models for action-based Chinese dependency parsing. 
In Proceedings of ECML/PKDD.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Annotating information structures in Chinese texts using HowNet", "authors": [ { "first": "K", "middle": [ "W" ], "last": "Gan", "suffix": "" }, { "first": "P", "middle": [ "W" ], "last": "Wong", "suffix": "" } ], "year": 2000, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. W. Gan and P. W. Wong. 2000. Annotating information structures in Chinese texts using HowNet. In Proceedings of ACL.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Discriminative classifier for deterministic dependency parsing", "authors": [ { "first": "J", "middle": [], "last": "Hall", "suffix": "" }, { "first": "J", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "J", "middle": [], "last": "Nilsson", "suffix": "" } ], "year": 2006, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "316--323", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Hall, J. Nivre, and J. Nilsson. 2006. Discriminative classifier for deterministic dependency parsing. In Proceedings of ACL, pages 316-323.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Dynamic Programming for Linear-Time Incremental Parsing", "authors": [ { "first": "L", "middle": [], "last": "Huang", "suffix": "" }, { "first": "K", "middle": [], "last": "Sagae", "suffix": "" } ], "year": 2010, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "1077--1086", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Huang and K. Sagae. 2010. Dynamic Programming for Linear-Time Incremental Parsing. 
In Proceedings of ACL, pages 1077-1086.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Accurate unlexicalized parsing", "authors": [ { "first": "D", "middle": [], "last": "Klein", "suffix": "" }, { "first": "C", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2003, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "423--430", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Klein and C. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of ACL, pages 423-430.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Simple semi-supervised dependency parsing", "authors": [ { "first": "T", "middle": [], "last": "Koo", "suffix": "" }, { "first": "X", "middle": [], "last": "Carreras", "suffix": "" }, { "first": "M", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "595--603", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Koo, X. Carreras, and M. Collins. 2008. Simple semi-supervised dependency parsing. In Proceedings of ACL-08: HLT, pages 595-603.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Refining grammars for parsing with hierarchical semantic knowledge", "authors": [ { "first": "X", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Y", "middle": [], "last": "Fan", "suffix": "" }, { "first": "M", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "X", "middle": [], "last": "Wu", "suffix": "" }, { "first": "H", "middle": [], "last": "Chi", "suffix": "" } ], "year": 2009, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "1298--1307", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Lin, Y. Fan, M. Zhang, X. Wu, and H. Chi. 2009. Refining grammars for parsing with hierarchical semantic knowledge. 
In Proceedings of EMNLP, pages 1298-1307.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Subdivided verbs to improve syntactic parsing", "authors": [ { "first": "T", "middle": [], "last": "Liu", "suffix": "" }, { "first": "J", "middle": [], "last": "Ma", "suffix": "" }, { "first": "H", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "S", "middle": [], "last": "Li", "suffix": "" } ], "year": 2007, "venue": "Journal of electronics", "volume": "24", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Liu, J. Ma, H. Zhang, and S. Li. 2007. Subdivided verbs to improve syntactic parsing. Journal of Electronics, 24(3).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Probabilistic CFG with latent annotation", "authors": [ { "first": "T", "middle": [], "last": "Matsuzaki", "suffix": "" }, { "first": "Y", "middle": [], "last": "Miyao", "suffix": "" }, { "first": "J", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "75--82", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Matsuzaki, Y. Miyao, and J. Tsujii. 2005. Probabilistic CFG with latent annotation. In Proceedings of ACL, pages 75-82.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Stacking dependency parsers", "authors": [ { "first": "A", "middle": [ "F T" ], "last": "Martins", "suffix": "" }, { "first": "D", "middle": [], "last": "Das", "suffix": "" }, { "first": "N", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "E", "middle": [ "P" ], "last": "Xing", "suffix": "" } ], "year": 2008, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "157--166", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. F. T. Martins, D. Das, N. A. Smith, and E. P. Xing. 2008. Stacking dependency parsers. 
In Proceedings of EMNLP, pages 157-166.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Online learning of approximate dependency parsing algorithms", "authors": [ { "first": "R", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "F", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2006, "venue": "Proceedings of EACL", "volume": "", "issue": "", "pages": "81--88", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. McDonald and F. Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceedings of EACL, pages 81-88.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Online large-margin training of dependency parsers", "authors": [ { "first": "R", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "K", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "F", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "91--98", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. McDonald, K. Crammer, and F. Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of ACL, pages 91-98.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Integrating graph-based and transition-based dependency parsing", "authors": [ { "first": "J", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "R", "middle": [], "last": "Mcdonald", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "950--958", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Nivre and R. McDonald. 2008. Integrating graph-based and transition-based dependency parsing. 
In Proceedings of ACL-08: HLT, pages 950-958.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Learning accurate, compact, and interpretable tree annotation", "authors": [ { "first": "S", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "D", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2006, "venue": "Proceedings of COLING-ACL", "volume": "", "issue": "", "pages": "433--440", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Petrov and D. Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of COLING-ACL, pages 433-440.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Automatic word sense discrimination", "authors": [ { "first": "H", "middle": [], "last": "Schuetze", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "24", "issue": "1", "pages": "97--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Schuetze. 1998. Automatic word sense discrimination. Computational Linguistics, 24(1): 97-124.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Simple training of dependency parsers via structured boosting", "authors": [ { "first": "Q", "middle": [ "I" ], "last": "Wang", "suffix": "" }, { "first": "D", "middle": [], "last": "Lin", "suffix": "" }, { "first": "D", "middle": [], "last": "Schuurmans", "suffix": "" } ], "year": 2007, "venue": "Proceedings of IJCAI", "volume": "", "issue": "", "pages": "1756--1762", "other_ids": {}, "num": null, "urls": [], "raw_text": "Q. I. Wang, D. Lin, and D. Schuurmans. 2007. Simple training of dependency parsers via structured boosting. 
In Proceedings of IJCAI, pages 1756-1762.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Parsing the Penn Chinese Treebank with semantic knowledge", "authors": [ { "first": "D", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "S", "middle": [], "last": "Li", "suffix": "" }, { "first": "Q", "middle": [], "last": "Liu", "suffix": "" }, { "first": "S", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Y", "middle": [], "last": "Qian", "suffix": "" } ], "year": 2005, "venue": "Proceedings of IJCNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Xiong, S. Li, Q. Liu, S. Lin, and Y. Qian. 2005. Parsing the Penn Chinese Treebank with semantic knowledge. In Proceedings of IJCNLP.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "The Penn Chinese Treebank: Phrase structure annotation of a large corpus", "authors": [ { "first": "N", "middle": [], "last": "Xue", "suffix": "" }, { "first": "F", "middle": [], "last": "Xia", "suffix": "" }, { "first": "F.-D", "middle": [], "last": "Chiou", "suffix": "" }, { "first": "M", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2005, "venue": "Natural Language Engineering", "volume": "10", "issue": "4", "pages": "1--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Xue, F. Xia, F.-D. Chiou, and M. Palmer. 2005. The Penn Chinese Treebank: Phrase structure annotation of a large corpus. Natural Language Engineering, 10(4):1-30.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Statistical dependency analysis with support vector machines", "authors": [ { "first": "Matsumoto", "middle": [], "last": "Yamada", "suffix": "" } ], "year": 2003, "venue": "Proceedings of IWPT", "volume": "", "issue": "", "pages": "195--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yamada and Matsumoto. 2003. Statistical dependency analysis with support vector machines. 
In Proceedings of IWPT, pages 195-206.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Chinese dependency parsing with large scale automatically constructed case structures", "authors": [ { "first": "K", "middle": [], "last": "Yu", "suffix": "" }, { "first": "D", "middle": [], "last": "Kawahara", "suffix": "" }, { "first": "S", "middle": [], "last": "Kurohashi", "suffix": "" } ], "year": 2008, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "1049--1056", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Yu, D. Kawahara, and S. Kurohashi. 2008. Chinese dependency parsing with large scale automatically constructed case structures. In Proceedings of COLING, pages 1049-1056.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A tale of two parsers: investigating and combining graph-based and transition-based dependency parsing using beam-search", "authors": [ { "first": "Y", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "S", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2008, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "562--571", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Zhang and S. Clark. 2008. A tale of two parsers: investigating and combining graph-based and transition-based dependency parsing using beam-search. In Proceedings of EMNLP, pages 562-571.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Cross language dependency parsing using a bilingual lexicon", "authors": [ { "first": "H", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Y", "middle": [], "last": "Song", "suffix": "" }, { "first": "C", "middle": [], "last": "Kit", "suffix": "" }, { "first": "G", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2009, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "55--63", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Zhao, Y. Song, C. Kit, and G. Zhou. 2009. 
Cross language dependency parsing using a bilingual lexicon. In Proceedings of ACL, pages 55-63.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Exploiting web-derived selectional preference to improve statistical dependency parsing", "authors": [ { "first": "G", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "J", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "K", "middle": [], "last": "Liu", "suffix": "" }, { "first": "L", "middle": [], "last": "Cai", "suffix": "" } ], "year": 2011, "venue": "Proceedings of ACL-HLT", "volume": "", "issue": "", "pages": "1556--1665", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Zhou, J. Zhao, K. Liu, and L. Cai. 2011. Exploiting web-derived selectional preference to improve statistical dependency parsing. In Proceedings of ACL-HLT, pages 1556-1665.", "links": null } }, "ref_entries": { "FIGREF1": { "uris": null, "text": "The number of the most frequent errors relative to POS types on the development set for first-order parsing.", "type_str": "figure", "num": null }, "FIGREF2": { "uris": null, "text": "Fine-grained feature generation for words with POS tag NN. The left part is the word subcategory and the right part is the HowNet hierarchy from generality to speciality. The merging procedure is based on hypernym-hyponymy relations, proceeding bottom-up.", "type_str": "figure", "num": null }, "FIGREF3": { "uris": null, "text": "Dependency trees of an example sentence \"\u4ee5(with) \u540d\u5b57(name) \u547d\u540d(named) \u7684(of) \u5956(prize) \u88ab(by) \u6388\u4e88(award)\" as its English translation \"\u2022 \u2022 \u2022 prize named by \u2022 \u2022 \u2022 name is awarded \u2022 \u2022 \u2022 \". 
(a) Dependency tree produced by the baseline model; (b) Dependency tree produced by the proposed approach.", "type_str": "figure", "num": null }, "FIGREF4": { "uris": null, "text": "Neighborhood ambiguities in Chinese dependency parsing.", "type_str": "figure", "num": null }, "TABREF2": { "html": null, "content": "
NN-InstitutePlace NN-aValue NN-organization NN-event NN-human NN-affairs NN-mental NN-entity NN-artifactNN \u4f01\u4e1a(enterprise) \u516c\u53f8(company) \u7ecf\u6d4e(economy)\u56fd\u9645(international) \u56fd\u5bb6(country) \u653f\u5e9c(government) \u53d1\u5c55(developing)\u5408\u4f5c(cooperation) \u8bb0\u8005(reporter) \u4e13\u5bb6(expert) \u8d38\u6613(trading) \u91d1\u878d(financial) \u60c5\u7eea(mood) \u611f\u53d7(feelings) \u540e\u8005(latter) \u673a\u4f1a(opportunity) \u68c9\u82b1(cotton) \u7ef4\u751f\u7d20(vitamin)VV-event VV-aValue VV-SelfMoveInDirection VV-change VV-attribute VV-entity VV-AlterRelation VV-AlterPossession VV-AlterPhysicalVV \u731c\u5230(guess) \u9884\u89c1(foresee) \u5c0f\u5fc3(care) \u53ef\u4ee5(can) \u8fdb\u884c(conduct) \u6269\u6563(spread) \u589e\u957f(increase) \u6da8\u4ef7(deform) \u7b80\u79f0(abbreviation) \u5e93\u5bb9(storage capacity) \u7ecf\u5386(experience) \u8003\u8651(consider) \u56f4\u56f0(siege) \u8131\u79bb(separate) \u501f\u7528(borrow) \u8d2d\u8fdb(buy) \u5efa\u9020(build) \u5236\u6210(make)
\u2022 \u2022 \u2022 AD-aValue AD-event\u2022 \u2022 \u2022 \u4ee5\u540e(after) \u552f\u6709(only) AD \u8fd8(also) \u4e0d\u7ba1(no matter)\u2022 \u2022 \u2022 JJ-aValue JJ-event\u2022 \u2022 \u2022 \u5171\u540c(together) \u7279\u522b(special) JJ \u7ee7\u7eed(continue) \u76f8\u5bf9(relatively)
\u2022 \u2022 \u2022\u2022 \u2022 \u2022\u2022 \u2022 \u2022\u2022 \u2022 \u2022
", "type_str": "table", "text": "Nominal categories are the most heavily split. For ex-", "num": null }, "TABREF3": { "html": null, "content": "
NN: 24, VV: 17, NR: 5, JJ: 8, CC: 7, DEG: 1, M: 5, VA: 4, LC: 1, PN: 1, DT: 1, VC: 2, VE: 1, ON: 1, P: 4, NT: 3, CS: 3, AD: 5, SB: 1, CD: 1, DEC: 1, AS: 1, MSP: 1, OD: 1, DEV: 1, BA: 1, LJ: 1, LB: 1, DER: 1, SP: 1, IJ: 1, ETC: 1, PU: 1
", "type_str": "table", "text": "The two words with their English translations in the subcategories of some POS tags.", "num": null }, "TABREF5": { "html": null, "content": "", "type_str": "table", "text": "Baseline (left) and fine-grained (right) feature templates. Abbreviation: ht=head POS, hw= head word, hf=fine-grained POS of head, mf=fineg-grained POS of modifier. st, gt, sf, gf= likewise for sibling and grandchild.", "num": null }, "TABREF8": { "html": null, "content": "
Systems | \u226440 words (UAS) | Full (UAS)
", "type_str": "table", "text": "Dependency parsing results on the test set after using the merging operator.", "num": null }, "TABREF9": { "html": null, "content": "
(a) P NN VV DEC NN SB VV
(b) P NN VV DEC NN SB VV
(b, fine-grained) P-event NN-attribute VV-AlterRelational DEC NN-event SB VV-AlterRelational
", "type_str": "table", "text": "Comparison of our final results with other best-performing systems on this data set.", "num": null } } } }