{ "paper_id": "I08-1012", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:42:16.955264Z" }, "title": "Dependency Parsing with Short Dependency Relations in Unlabeled Data", "authors": [ { "first": "Wenliang", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "Computational Linguistics Group", "institution": "National Institute of Information and Communications Technology", "location": { "addrLine": "3-5 Hikari-dai, Seika-cho, Soraku-gun", "postCode": "619-0289", "settlement": "Kyoto", "country": "Japan" } }, "email": "chenwl@nict.go.jp" }, { "first": "Daisuke", "middle": [], "last": "Kawahara", "suffix": "", "affiliation": { "laboratory": "Computational Linguistics Group", "institution": "National Institute of Information and Communications Technology", "location": { "addrLine": "3-5 Hikari-dai, Seika-cho, Soraku-gun", "postCode": "619-0289", "settlement": "Kyoto", "country": "Japan" } }, "email": "" }, { "first": "Kiyotaka", "middle": [], "last": "Uchimoto", "suffix": "", "affiliation": { "laboratory": "Computational Linguistics Group", "institution": "National Institute of Information and Communications Technology", "location": { "addrLine": "3-5 Hikari-dai, Seika-cho, Soraku-gun", "postCode": "619-0289", "settlement": "Kyoto", "country": "Japan" } }, "email": "uchimoto@nict.go.jp" }, { "first": "Yujie", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "Computational Linguistics Group", "institution": "National Institute of Information and Communications Technology", "location": { "addrLine": "3-5 Hikari-dai, Seika-cho, Soraku-gun", "postCode": "619-0289", "settlement": "Kyoto", "country": "Japan" } }, "email": "" }, { "first": "Hitoshi", "middle": [], "last": "Isahara", "suffix": "", "affiliation": { "laboratory": "Computational Linguistics Group", "institution": "National Institute of Information and Communications Technology", "location": { "addrLine": "3-5 Hikari-dai, Seika-cho, Soraku-gun", "postCode": "619-0289", "settlement": "Kyoto", "country": "Japan" } }, "email": "isahara@nict.go.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents an effective dependency parsing approach of incorporating short dependency information from unlabeled data. The unlabeled data is automatically parsed by a deterministic dependency parser, which can provide relatively high performance for short dependencies between words. We then train another parser which uses the information on short dependency relations extracted from the output of the first parser. Our proposed approach achieves an unlabeled attachment score of 86.52, an absolute 1.24% improvement over the baseline system on the data set of Chinese Treebank.", "pdf_parse": { "paper_id": "I08-1012", "_pdf_hash": "", "abstract": [ { "text": "This paper presents an effective dependency parsing approach of incorporating short dependency information from unlabeled data. The unlabeled data is automatically parsed by a deterministic dependency parser, which can provide relatively high performance for short dependencies between words. We then train another parser which uses the information on short dependency relations extracted from the output of the first parser. 
Our proposed approach achieves an unlabeled attachment score of 86.52, an absolute 1.24% improvement over the baseline system on the Chinese Treebank data set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In dependency parsing, we attempt to build the dependency links between the words of a sentence. Given sufficient labeled data, there are several supervised learning methods for training high-performance dependency parsers. However, current statistical dependency parsers provide worse results as the dependency length becomes longer. Here the length of the dependency between words w_i and w_j is simply |i \u2212 j|. Figure 1 shows the F1 score provided by a deterministic parser relative to dependency length on our testing data, where precision is the percentage of predicted arcs of length d that are correct, recall is the percentage of gold-standard arcs of length d that are correctly predicted, and F1 = 2 \u00d7 precision \u00d7 recall / (precision + recall). From the figure, we find that the F1 score decreases as the dependency length increases. We also notice that the parser provides good results for short dependencies (94.57% for dependency length = 1 and 89.40% for dependency length = 2). In this paper, a short dependency refers to a dependency whose length is 1 or 2. Labeled data is expensive, while unlabeled data can be obtained easily. In this paper, we present an approach to incorporating unlabeled data into dependency parsing. First, all the sentences in the unlabeled data are parsed by a dependency parser, which can provide state-of-the-art performance. We then extract information on short dependency relations from the parsed data, because the performance for short dependencies is relatively higher than for longer ones. Finally, we train another parser by using the information as features.", "cite_spans": [], "ref_spans": [ { "start": 421, "end": 429, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The proposed method can be regarded as a semi-supervised learning method. Currently, most semi-supervised methods seem to do well with artificially restricted labeled data, but they are unable to outperform the best supervised baseline when more labeled data is added. In our experiments, we show that our approach significantly outperforms a state-of-the-art parser that is trained on the full labeled data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The goal of dependency parsing is to tag dependency links that show the head-modifier relations between words. A simple example is shown in Figure 2, where the link between a and bird denotes that a is the dependent of the head bird.", "cite_spans": [], "ref_spans": [ { "start": 134, "end": 142, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Motivation and previous work", "sec_num": "2" }, { "text": "We define the word distance between words w_i and w_j as |i \u2212 j|. Usually, the two words in a head-dependent relation in one sentence can be adjacent words (word distance = 1) or neighboring words (word distance = 2) in other sentences. For example, \"a\" and \"bird\" have a head-dependent relation in the sentence in Figure 2. 
They can also be adjacent words in another sentence, \"I see a bird\".", "cite_spans": [], "ref_spans": [ { "start": 346, "end": 354, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Motivation and previous work", "sec_num": "2" }, { "text": "Suppose that our task is Chinese dependency parsing. Here, the string \" JJ(Specialist-level)/ NN(working)/ NN(discussion)\" should be tagged as solution (a) in Figure 3. However, our current parser may choose solution (b) in Figure 3 without any additional information. The point is how to assign the head for \" (Specialist-level)\". Is it \" (working)\" or \" (discussion)\"? As Figure 1 suggests, the current dependency parser is good at tagging the relation between adjacent words. Thus, we expect that dependencies of adjacent words can provide useful information for parsing words whose word distances are longer. When we search for the string \" (Specialist-level)/ (discussion)\" on google.com, many relevant documents can be retrieved. If we have a good parser, we can assign the relations between the two words in the retrieved documents, as Figure 4 shows. We find that \" (discussion)\" is the head of \" (Specialist-level)\" in many cases.", "cite_spans": [], "ref_spans": [ { "start": 162, "end": 171, "text": "Figure 3", "ref_id": null }, { "start": 233, "end": 241, "text": "Figure 3", "ref_id": null }, { "start": 383, "end": 391, "text": "Figure 1", "ref_id": null }, { "start": 847, "end": 855, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Motivation and previous work", "sec_num": "2" }, { "text": "Figure 4: Parsing \" (Specialist-level)/ (discussion)\" in unlabeled data", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation and previous work", "sec_num": "2" }, { "text": "Now, consider what a learning model could do to assign the appropriate relation between \" (Specialist-level)\" and \" (discussion)\" in the string \"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation and previous work", "sec_num": "2" }, { "text": "(Specialist-level)/ (working)/ (discussion)\". In this case, we provide additional information to \" (discussion)\" as the possible head of \" (Specialist-level)\" in the unlabeled data. 
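To make this look-up concrete, here is a minimal sketch (our illustration, not the authors' code) that counts how often each word is chosen as the head of a given dependent among short dependencies in an auto-parsed corpus; the toy corpus, its (word, head-index) sentence format, and the helper name head_counts are all assumptions made for the example:

from collections import Counter

# Hypothetical auto-parsed corpus: each sentence is a list of
# (word, head_index) pairs, with head_index = -1 for the root.
parsed_corpus = [
    [('we', 1), ('held', -1), ('specialist-level', 3), ('discussion', 1)],
    [('the', 1), ('specialist-level', 2), ('discussion', -1), ('began', 2)],
]

def head_counts(corpus, dependent):
    # Count the heads of `dependent` over short (length 1 or 2) arcs.
    counts = Counter()
    for sentence in corpus:
        for i, (word, head) in enumerate(sentence):
            if word == dependent and head >= 0 and abs(head - i) <= 2:
                counts[sentence[head][0]] += 1
    return counts

print(head_counts(parsed_corpus, 'specialist-level'))
# Counter({'discussion': 2}): 'discussion' is the preferred head
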
In this way, the learning model can use this information to make the correct decision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation and previous work", "sec_num": "2" }, { "text": "So far, we have demonstrated how to use the dependency relation between adjacent words in unlabeled data to help tag the relation between two words whose word distance is 2. In a similar way, we can also use the information to assign the relation between two words whose word distance is longer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation and previous work", "sec_num": "2" }, { "text": "Based on the above observations, we propose an approach that exploits information from large-scale unlabeled data for dependency parsing. We use a parser to parse the sentences in the unlabeled data. Then another parser makes use of the information on short dependency relations in the newly parsed data to improve performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation and previous work", "sec_num": "2" }, { "text": "Our study is related to incorporating unlabeled data into a model for parsing. There are several other studies relevant to ours, as described below. A simple method is self-training, in which the existing model first labels unlabeled data and the newly labeled data is then treated as hand-annotated data for training a new model. But it seems that self-training is not so effective. (Steedman et al., 2003) report minor improvements from using self-training for syntactic parsing on small labeled data sets. The reason may be that errors in the original model would be amplified in the new model. (McClosky et al., 2006) present a successful instance of parsing with self-training by using a re-ranker. As Figure 1 suggests, the dependency parser performs poorly on words with long dependency distances. In our approach, we choose partial yet reliable information, which comes from short dependency relations, for the dependency parser.", "cite_spans": [ { "start": 387, "end": 410, "text": "(Steedman et al., 2003)", "ref_id": "BIBREF12" }, { "start": 594, "end": 616, "text": "(McClosky et al., 2006", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 705, "end": 713, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Motivation and previous work", "sec_num": "2" }, { "text": "(Smith and Eisner, 2006) present an approach to improving the accuracy of dependency grammar induction models trained by EM from unlabeled data. They obtain consistent improvements by penalizing dependencies between two words that are farther apart in the string.", "cite_spans": [ { "start": 2, "end": 25, "text": "Smith and Eisner, 2006)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation and previous work", "sec_num": "2" }, { "text": "The study most relevant to ours is that of (Kawahara and Kurohashi, 2006) . They present an integrated probabilistic model for Japanese parsing. They also use partial information after the current parser parses the sentences. Our work differs in that we consider general dependency relations, while they only consider case frames. 
Moreover, we represent the additional information as features for learning models, while they use the case frames as one component of a probabilistic model.", "cite_spans": [ { "start": 57, "end": 73, "text": "Kurohashi, 2006)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation and previous work", "sec_num": "2" }, { "text": "In this section, we describe our approach of exploiting reliable features from unlabeled data, which is parsed by a basic parser. We then train another parser based on the new feature space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Approach", "sec_num": "3" }, { "text": "In this paper, we implement a deterministic parser based on the model described by (Nivre, 2003) . This model is simple and works very well in the shared tasks of CoNLL 2006 (Nivre et al., 2006) and CoNLL 2007. In fact, our approach can also be applied to other parsers, such as the parser of (Yamada and Matsumoto, 2003), and so on.", "cite_spans": [ { "start": 83, "end": 96, "text": "(Nivre, 2003)", "ref_id": "BIBREF10" }, { "start": 173, "end": 193, "text": "(Nivre et al., 2006)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Training a basic parser", "sec_num": "3.1" }, { "text": "The parser predicts unlabeled directed dependencies between words in sentences. The algorithm (Nivre, 2003) builds a dependency parse tree in one left-to-right pass over the input, and uses a stack to store the processed tokens. The behaviors of the parser are defined by four elementary actions (where TOP is the token on top of the stack and NEXT is the next token in the original input string):", "cite_spans": [ { "start": 94, "end": 107, "text": "(Nivre, 2003)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "The parser", "sec_num": "3.1.1" }, { "text": "\u2022 Left-Arc(LA): Add an arc from NEXT to TOP; pop the stack. The first two actions (Left-Arc and Right-Arc) mean that there is a dependency relation between TOP and NEXT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The parser", "sec_num": "3.1.1" }, { "text": "More information about the parser is available in (Nivre, 2003) . The parser uses a classifier to produce a sequence of actions for a sentence. In our experiments, we use the SVM model as the classifier. More specifically, our parser uses LIBSVM (Chang and Lin, 2001 ) with a polynomial kernel (degree = 3) and the built-in one-versus-all strategy for multi-class classification.", "cite_spans": [ { "start": 64, "end": 77, "text": "(Nivre, 2003)", "ref_id": "BIBREF10" }, { "start": 260, "end": 280, "text": "(Chang and Lin, 2001", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "The parser", "sec_num": "3.1.1" }, { "text": "We represent basic features extracted from the fields of the data representation, including words and part-of-speech (POS) tags. The basic features used in our parser are listed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic features", "sec_num": "3.1.2" }, { "text": "\u2022 The features based on words: the words of TOP and NEXT, the word of the head of TOP, the words of the leftmost and rightmost dependents of TOP, and the word of the token immediately after NEXT in the original input string. 
\u2022 The features based on POS: the POS of TOP and NEXT, the POS of the token immediately below TOP, the POS of the leftmost and rightmost dependents of TOP, the POS of the next three tokens after NEXT, and the POS of the token immediately before NEXT in the original input string.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic features", "sec_num": "3.1.2" }, { "text": "With these basic features, we can train a state-of-the-art supervised parser on labeled data. In the following, we call this parser the Basic Parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic features", "sec_num": "3.1.2" }, { "text": "The input of our approach is unlabeled data, which can be obtained easily. For the Basic Parser, the corpus should have part-of-speech (POS) tags. Therefore, we first assign POS tags using a POS tagger. For Chinese sentences, we also segment the sentences into words before POS tagging. After this preprocessing, we have word-segmented sentences with POS tags. We then use the Basic Parser to parse all sentences in the unlabeled data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unlabeled data preprocessing and parsing", "sec_num": "3.2" }, { "text": "The Basic Parser can provide complete dependency parse trees for all sentences in the unlabeled data. As Figure 1 shows, short dependencies are more reliable. To offer reliable information to the model, we propose features based on short dependency relations in the newly parsed data.", "cite_spans": [], "ref_spans": [ { "start": 103, "end": 111, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Using short dependency relations as features", "sec_num": "3.3" }, { "text": "In a parsed sentence, if the dependency length of two words is 1 or 2, we add this word pair into a list named DepList and count its frequency. We consider the direction and length of the dependency: D1 refers to pairs with dependency length 1, D2 refers to pairs with dependency length 2, R refers to a right arc, and L refers to a left arc. For example, \" (specialist-level)\" and \" (discussion)\" are adjacent words in a sentence \" (We)/ (held)/ (specialist-level)/ (discussion)/ \" and have a left dependency arc assigned by the Basic Parser. We add a word pair \"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collecting reliable information", "sec_num": "3.3.1" }, { "text": "(specialist-level)-(discussion)\" with \"D1-L\" and its frequency into the DepList.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collecting reliable information", "sec_num": "3.3.1" }, { "text": "According to frequency, we then group word pairs into different buckets: a bucket ONE for frequency 1, a bucket LOW for 2-7, a bucket MID for 8-14, and a bucket HIGH for 15+. We chose these threshold values via testing on development data. For example, the frequency of the pair \" (specialist-level)-(discussion)\" with \"D1-L\" is 20, so it is grouped into the bucket \"D1-L-HIGH\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collecting reliable information", "sec_num": "3.3.1" }, { "text": "Here, we do not use the frequencies as the weights of the features. 
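As a concrete illustration, the following minimal sketch shows one way to build the DepList and its frequency buckets (our reconstruction under the thresholds stated above, not the authors' code); the (word, head-index) sentence format and the function names are assumptions:

from collections import defaultdict

def bucket(freq):
    # Thresholds from above: 1 -> ONE, 2-7 -> LOW, 8-14 -> MID, 15+ -> HIGH.
    if freq == 1:
        return 'ONE'
    if freq <= 7:
        return 'LOW'
    if freq <= 14:
        return 'MID'
    return 'HIGH'

def build_deplist(parsed_corpus):
    # Map (dependent, head, arc type) triples to buckets such as D1-L-HIGH.
    freq = defaultdict(int)
    for sentence in parsed_corpus:
        for i, (word, head) in enumerate(sentence):
            if head < 0:
                continue  # skip the root
            length = abs(head - i)
            if length > 2:
                continue  # keep only short dependencies
            # A head standing to the right of its dependent gives a left arc.
            direction = 'L' if head > i else 'R'
            key = (word, sentence[head][0], 'D%d-%s' % (length, direction))
            freq[key] += 1
    return dict((k, '%s-%s' % (k[2], bucket(n))) for k, n in freq.items())

Under this sketch, the pair \" (specialist-level)-(discussion)\", seen 20 times as a length-1 left arc, would be mapped to \"D1-L-HIGH\", matching the example above. 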
We derive the weights of the features with the SVM model from the training data rather than approximating the weights from the unlabeled data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collecting reliable information", "sec_num": "3.3.1" }, { "text": "Based on the DepList, we represent new features for training or parsing the current two words: TOP and NEXT. We consider word pairs from the context around TOP and NEXT, and get the buckets of the pairs in the DepList.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New features", "sec_num": "3.3.2" }, { "text": "First, we represent the features based on D1. We name these features D1 features. The D1 features are listed according to the different word distances between TOP and NEXT as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New features", "sec_num": "3.3.2" }, { "text": "1. Word distance is 1: (TN0) the bucket of the word pair of TOP and NEXT, and (TN1) the bucket of the word pair of TOP and the next token after NEXT. 2. Word distance is 2 or 3+: (TN0) the bucket of the word pair of TOP and NEXT, (TN1) the bucket of the word pair of TOP and the next token after NEXT, and (TN-1) the bucket of the word pair of TOP and the token immediately before NEXT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New features", "sec_num": "3.3.2" }, { "text": "In item 2), all features are in turn combined with two sets of distances: a set for distance 2 and a set for distances 3+. Thus, we have 8 types of D1 features, including 2 types in item 1) and 6 types in item 2). Each feature is formatted as \"Position:WordDistance:PairBucket\". For example, suppose we have the string \" (specialist-level)/w1/w2/w3/ (discussion)\", where \" (specialist-level)\" is TOP and \" (discussion)\" is NEXT. Then we have the feature \"TN0:3+:D1-L-HIGH\" for TOP and NEXT, because the word distance is 4 (3+) and \"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New features", "sec_num": "3.3.2" }, { "text": "(specialist-level)-(discussion)\" belongs to the bucket \"D1-L-HIGH\". Here, if a string belongs to two buckets, we use the most frequent bucket.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New features", "sec_num": "3.3.2" }, { "text": "Then, we represent the features based on D2. We name these features D2 features. The D2 features are listed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New features", "sec_num": "3.3.2" }, { "text": "1. Word distance is 1: (TN1) the bucket of the word pair of TOP and the next token after NEXT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New features", "sec_num": "3.3.2" }, { "text": "2. Word distance is 2: (TN0) the bucket of the word pair of TOP and NEXT, and (TN1) the bucket of the word pair of TOP and the next token after NEXT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New features", "sec_num": "3.3.2" }, { "text": "For labeled data, we used the Chinese Treebank (CTB) version 4.0 in our experiments. We used the same rules for conversion and created the same data split as in previous work: files 1-270 and 400-931 as training, files 271-300 as testing, and files 301-325 as development. We used the gold-standard segmentation and POS tags in the CTB. For unlabeled data, we used the PFR corpus. It includes documents from People's Daily in 1998 (12 months). There are about 290 thousand sentences and 15 million words in the PFR corpus. To simplify, we used its segmentation. 
We discarded the POS tags because the PFR corpus and the CTB use different POS sets. We used the TNT package (Brants, 2000) , a very efficient statistical part-of-speech tagger, to train a POS tagger on the training data of the CTB.", "cite_spans": [ { "start": 647, "end": 661, "text": "(Brants, 2000)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We measured the quality of the parser by the unlabeled attachment score (UAS), i.e., the percentage of tokens with the correct HEAD. We report two types of scores: \"UAS without p\" is the UAS score excluding all punctuation tokens, and \"UAS with p\" is the score including all punctuation tokens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "In the experiments, we trained the parsers on the training data and tuned the parameters on the development data. In the following sections, \"baseline\" refers to the Basic Parser (the model with basic features), and \"OURS\" refers to our proposed parser (the model with all features). Table 1 shows the results of the parser with different feature sets, where \"+D1\" refers to the parser with basic features and D1 features, and \"+D2\" refers to the parser with all features (basic features, D1 features, and D2 features). From the table, we found a large improvement (1.12% for UAS without p and 1.23% for UAS with p) from adding D1 features. D2 features provided a minor improvement: 0.12% for UAS without p and 0.14% for UAS with p. This may be because the information from dependencies of length 2 contains more noise. In total, we achieved a 1.24% improvement for UAS without p and a 1.37% improvement for UAS with p. The improvement is significant in a one-tailed paired t-test (p < 10^-5). We also examined the effect of the number of unlabeled sentences used. Table 2 shows the results with different numbers of sentences. Here, we randomly chose different percentages of sentences from the unlabeled data. When we used 1% of the unlabeled sentences, the parser achieved a large improvement. As we added more sentences, the parser obtained more benefit.", "cite_spans": [], "ref_spans": [ { "start": 272, "end": 279, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experimental results", "sec_num": "4.1" }, { "text": "Finally, we compare our parser to the state of the art. We used the same testing data as (Wang et al., 2005) did, selecting sentences of length up to 40. Table 3 shows the results achieved by our method and by other researchers (UAS with p), where Wang05 refers to (Wang et al., 2005) , Wang07 refers to (Wang et al., 2007), and McDonald&Pereira06 refers to (McDonald and Pereira, 2006). From the table, we found that our parser performed best. We now look at the improvement relative to dependency length, as Figure 5 shows. From the figure, we found that our method provided better performance for dependency lengths of less than 13. In particular, we had improvements of 2.35% for dependency length 4, 3.13% for length 5, 2.56% for length 6, and 4.90% for length 7. For longer dependencies, the parser cannot provide stable improvements. The reason may be that shorter dependencies are often modifiers of nouns, such as determiners, adjectives, or pronouns modifying their direct neighbors, while longer dependencies typically represent modifiers of the root or of the main verb of a sentence. We did not provide new features for modifiers of the root. 
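The per-length scores behind Figure 1 and Figure 5 can be reproduced with a short script; the sketch below is ours, and its input format (one list of head indices per sentence, with -1 for the root) is an assumption rather than something specified in the paper:

from collections import defaultdict

def f1_by_length(gold_heads, pred_heads):
    # F1 per dependency length d: precision over predicted arcs of
    # length d, recall over gold-standard arcs of length d.
    correct = defaultdict(int)
    pred_total = defaultdict(int)
    gold_total = defaultdict(int)
    for gold, pred in zip(gold_heads, pred_heads):
        for i, g in enumerate(gold):
            if g >= 0:
                gold_total[abs(g - i)] += 1
        for i, p in enumerate(pred):
            if p >= 0:
                pred_total[abs(p - i)] += 1
        for i, (g, p) in enumerate(zip(gold, pred)):
            if g >= 0 and g == p:
                correct[abs(g - i)] += 1
    scores = {}
    for d in sorted(gold_total):
        prec = correct[d] / pred_total[d] if pred_total[d] else 0.0
        rec = correct[d] / gold_total[d]
        scores[d] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores
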
Figure 6 : Ambiguities", "cite_spans": [ { "start": 89, "end": 108, "text": "(Wang et al., 2005)", "ref_id": "BIBREF13" }, { "start": 263, "end": 282, "text": "(Wang et al., 2005)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 155, "end": 162, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 462, "end": 470, "text": "Figure 5", "ref_id": "FIGREF5" }, { "start": 1093, "end": 1101, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Comparison of other systems", "sec_num": "4.1.2" }, { "text": "In Chinese dependency parsing, there are many ambiguities in neighborhood, such as \"JJ NN NN\", \"AD VV VV\", \"NN NN NN\", \"JJ NN CC NN\". They have possible parsing trees as Figure 6 shows. For these ambiguities, our approach can provide additional information for the parser. For example, we have the following case in the data set: \" JJ(friendly)/ NN(corporation)/ NN(relationship)/\". We can provide additional information about the relations of \" JJ(friendly)/ NN(corporation)\" and \" JJ(friendly)/ NN(relationship)/\" in unlabeled data to help the parser make the correct decision.", "cite_spans": [], "ref_spans": [ { "start": 170, "end": 178, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Cases study in neighborhood", "sec_num": "5.2" }, { "text": "Our approach can also work for the longer constructions, such as \"JJ NN NN NN\" and \"NN NN NN NN\" in the similar way.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cases study in neighborhood", "sec_num": "5.2" }, { "text": "For the construction \"JJ NN1 CC NN2\", we now do not define special features to solve the ambiguity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cases study in neighborhood", "sec_num": "5.2" }, { "text": "However, based on the current DepList, we can also provide additional information about the relations of JJ/NN1 and JJ/NN2. For example, for the string \" JJ(further)/ NN(improvement)/ CC(and)/ NN(development)/\", the parser often assigns \" (improvement)\" as the head of \" (further)\" instead of \" (development)\". There is an entry \" (further)-(development)\" in the DepList. Here, we need a coordination identifier to identify these constructions. After that, we can provide the information for the model. in an automatically generated corpus parsed by a basic parser. We then train a new parser with the information. The new parser achieves an absolute improvement of 1.24% over the state-of-the-art parser on Chinese Treebank (from 85.28% to 86.52%).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cases study in neighborhood", "sec_num": "5.2" }, { "text": "There are many ways in which this research should be continued. First, feature representation needs to be improved. Here, we use a simple feature representation on short dependency relations. We may use a combined representation to use the information from long dependency relations even they are not so reliable. Second, we can try to select more accurately parsed sentences. Then we may collect more reliable information than the current one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cases study in neighborhood", "sec_num": "5.2" }, { "text": "More detailed information can be found at http://www.cis.upenn.edu/\u02dcchinese/.3 More detailed information can be found at http://www.icl.pku.edu.4 To know whether our POS tagger is good, we also tested the TNT package on the standard training and testing sets for full parsing(Wang et al., 2006). 
The TNT-based tagger provided 91.52% accuracy, a result comparable with (Wang et al., 2006).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This paper presents an effective approach to improve dependency parsing by using unlabeled data. We extract the information on short dependency relations in an automatically generated corpus parsed by a basic parser. We then train a new parser with the information. The new parser achieves an absolute improvement of 1.24% over the state-of-the-art parser on the Chinese Treebank (from 85.28% to 86.52%).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "TnT - a statistical part-of-speech tagger", "authors": [ { "first": "T", "middle": [], "last": "Brants", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 6th Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "224--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Brants. 2000. TnT - a statistical part-of-speech tagger. Proceedings of the 6th Conference on Applied Natural Language Processing, pages 224-231.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "LIBSVM: a library for support vector machines", "authors": [ { "first": "C", "middle": [ "C" ], "last": "Chang", "suffix": "" }, { "first": "C", "middle": [ "J" ], "last": "Lin", "suffix": "" } ], "year": 2001, "venue": "", "volume": "80", "issue": "", "pages": "604--611", "other_ids": {}, "num": null, "urls": [], "raw_text": "C.C. Chang and C.J. Lin. 2001. LIBSVM: a library for support vector machines. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm, 80:604-611.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Single malt or blended? a study in multilingual parser optimization", "authors": [ { "first": "Johan", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Nilsson", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "G\u00fclsen", "middle": [], "last": "Eryigit", "suffix": "" }, { "first": "Be\u00e1ta", "middle": [], "last": "Megyesi", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL", "volume": "", "issue": "", "pages": "933--939", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johan Hall, Jens Nilsson, Joakim Nivre, G\u00fclsen Eryigit, Be\u00e1ta Megyesi, Mattias Nilsson, and Markus Saers. 2007. Single malt or blended? a study in multilingual parser optimization. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, pages 933-939.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A fully-lexicalized probabilistic model for Japanese syntactic and case structure analysis", "authors": [ { "first": "D", "middle": [], "last": "Kawahara", "suffix": "" }, { "first": "S", "middle": [], "last": "Kurohashi", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "176--183", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Kawahara and S. Kurohashi. 2006. A fully-lexicalized probabilistic model for Japanese syntactic and case structure analysis. 
Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 176-183.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Reranking and self-training for parser adaptation", "authors": [ { "first": "D", "middle": [], "last": "Mcclosky", "suffix": "" }, { "first": "E", "middle": [], "last": "Charniak", "suffix": "" }, { "first": "M", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the ACL", "volume": "", "issue": "", "pages": "337--344", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. McClosky, E. Charniak, and M. Johnson. 2006. Reranking and self-training for parser adaptation. Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the ACL, pages 337-344.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Characterizing the errors of data-driven dependency parsing models", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", "volume": "", "issue": "", "pages": "122--131", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald and Joakim Nivre. 2007. Characterizing the errors of data-driven dependency parsing models. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 122-131.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Online learning of approximate dependency parsing algorithms", "authors": [ { "first": "R", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "F", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2006, "venue": "Proc. of the 11th Conf. of the European Chapter of the ACL (EACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. McDonald and F. Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proc. of the 11th Conf. of the European Chapter of the ACL (EACL).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Multilingual dependency analysis with a two-stage discriminative parser", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Lerman", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Tenth Conference on Computational Natural Language Learning (CoNLL-X)", "volume": "", "issue": "", "pages": "216--220", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald, Kevin Lerman, and Fernando Pereira. 2006. Multilingual dependency analysis with a two-stage discriminative parser. In Proceedings of the Tenth Conference on Computational Natural Language Learning (CoNLL-X), pages 216-220, New York City, June. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Labeled pseudo-projective dependency parsing with support vector machines", "authors": [ { "first": "J", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "J", "middle": [], "last": "Hall", "suffix": "" }, { "first": "J", "middle": [], "last": "Nilsson", "suffix": "" }, { "first": "G", "middle": [], "last": "Eryigit", "suffix": "" }, { "first": "S", "middle": [], "last": "Marinov", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Nivre, J. Hall, J. Nilsson, G. Eryigit, and S Marinov. 2006. Labeled pseudo-projective dependency parsing with support vector machines.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The CoNLL 2007 shared task on dependency parsing", "authors": [ { "first": "J", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "J", "middle": [], "last": "Hall", "suffix": "" }, { "first": "S", "middle": [], "last": "K\u00fcbler", "suffix": "" }, { "first": "R", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "J", "middle": [], "last": "Nilsson", "suffix": "" }, { "first": "S", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "D", "middle": [], "last": "Yuret", "suffix": "" } ], "year": 2007, "venue": "Proc. of the Joint Conf. on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Nivre, J. Hall, S. K\u00fcbler, R. McDonald, J. Nilsson, S. Riedel, and D. Yuret. 2007. The CoNLL 2007 shared task on dependency parsing. In Proc. of the Joint Conf. on Empirical Methods in Natural Lan- guage Processing and Computational Natural Lan- guage Learning (EMNLP-CoNLL).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "An efficient algorithm for projective dependency parsing", "authors": [ { "first": "J", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 8th International Workshop on Parsing Technologies (IWPT)", "volume": "", "issue": "", "pages": "149--160", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Nivre. 2003. An efficient algorithm for projective dependency parsing. Proceedings of the 8th Inter- national Workshop on Parsing Technologies (IWPT), pages 149-160.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Annealing structural bias in multilingual weighted grammar induction", "authors": [ { "first": "A", "middle": [], "last": "Noah", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Smith", "suffix": "" }, { "first": "", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "569--576", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noah A. Smith and Jason Eisner. 2006. Annealing struc- tural bias in multilingual weighted grammar induction. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meet- ing of the Association for Computational Linguistics, pages 569-576, Sydney, Australia, July. 
Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Bootstrapping statistical parsers from small datasets", "authors": [ { "first": "M", "middle": [], "last": "Steedman", "suffix": "" }, { "first": "M", "middle": [], "last": "Osborne", "suffix": "" }, { "first": "A", "middle": [], "last": "Sarkar", "suffix": "" }, { "first": "S", "middle": [], "last": "Clark", "suffix": "" }, { "first": "R", "middle": [], "last": "Hwa", "suffix": "" }, { "first": "J", "middle": [], "last": "Hockenmaier", "suffix": "" }, { "first": "P", "middle": [], "last": "Ruhlen", "suffix": "" }, { "first": "S", "middle": [], "last": "Baker", "suffix": "" }, { "first": "J", "middle": [], "last": "Crim", "suffix": "" } ], "year": 2003, "venue": "The Proceedings of the Annual Meeting of the European Chapter of the ACL", "volume": "", "issue": "", "pages": "331--338", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Steedman, M. Osborne, A. Sarkar, S. Clark, R. Hwa, J. Hockenmaier, P. Ruhlen, S. Baker, and J. Crim. 2003. Bootstrapping statistical parsers from small datasets. The Proceedings of the Annual Meeting of the European Chapter of the ACL, pages 331-338.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Strictly lexical dependency parsing", "authors": [ { "first": "Qin Iris", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Dale", "middle": [], "last": "Schuurmans", "suffix": "" }, { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qin Iris Wang, Dale Schuurmans, and Dekang Lin. 2005. Strictly lexical dependency parsing. In IWPT2005.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A Fast, Accurate Deterministic Parser for Chinese", "authors": [ { "first": "Mengqiu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Sagae", "suffix": "" }, { "first": "Teruko", "middle": [], "last": "Mitamura", "suffix": "" } ], "year": 2006, "venue": "Coling-ACL2006", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mengqiu Wang, Kenji Sagae, and Teruko Mitamura. 2006. A Fast, Accurate Deterministic Parser for Chi- nese. In Coling-ACL2006.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Simple training of dependency parsers via structured boosting", "authors": [ { "first": "Wang", "middle": [], "last": "Qin Iris", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qin Iris Wang, Dekang Lin, and Dale Schuurmans. 2007. Simple training of dependency parsers via structured boosting. In IJCAI2007.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Learning structured classifiers for statistical dependency parsing", "authors": [ { "first": "Wang", "middle": [], "last": "Qin Iris", "suffix": "" } ], "year": 2007, "venue": "NAACL-HLT 2007 Doctoral Consortium", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qin Iris Wang. 2007. Learning structured classifiers for statistical dependency parsing. 
In NAACL-HLT 2007 Doctoral Consortium.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Statistical dependency analysis with support vector machines", "authors": [ { "first": "H", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "Y", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2003, "venue": "Proc. of the 8th Intern. Workshop on Parsing Technologies (IWPT)", "volume": "", "issue": "", "pages": "195--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Yamada and Y. Matsumoto. 2003. Statistical depen- dency analysis with support vector machines. In Proc. of the 8th Intern. Workshop on Parsing Technologies (IWPT), pages 195-206.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "F-score relative to dependency length" }, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "Example dependency graph." }, "FIGREF2": { "uris": null, "num": null, "type_str": "figure", "text": "Figure 3: Two solutions for \" (Specialistlevel)/ (working)/ (discussion)\"" }, "FIGREF3": { "uris": null, "num": null, "type_str": "figure", "text": "\u2022 Right-Arc(RA): Add an arc from TOP to NEXT; push NEXT onto the stack.\u2022 Reduce(RE): Pop the stack.\u2022 Shift(SH): Push NEXT onto the stack." }, "FIGREF5": { "uris": null, "num": null, "type_str": "figure", "text": "Improvement relative to dependency length 5" }, "TABREF0": { "type_str": "table", "num": null, "text": "", "content": "
            UAS without p   UAS with p
baseline        85.28         83.79
+D1             86.40         85.02
+D2 (OURS)      86.52         85.16
", "html": null }, "TABREF1": { "type_str": "table", "num": null, "text": "", "content": "
Sentences       UAS without p   UAS with p
0% (baseline)       85.28         83.79
1%                  85.68         84.40
2%                  85.69         84.51
5%                  85.78         84.59
10%                 85.97         84.62
20%                 86.25         84.86
50%                 86.34         84.92
100% (OURS)         86.52         85.16
", "html": null }, "TABREF2": { "type_str": "table", "num": null, "text": "The results on the sentences length up to 40", "content": "
                     UAS with p
Wang05                  79.9
McDonald&Pereira06      82.5
Wang07                  86.6
baseline                87.1
OURS                    88.4
", "html": null }, "TABREF3": { "type_str": "table", "num": null, "text": "reported this result.", "content": "
Figure 6: the possible parse trees for the ambiguous sequences JJ NN NN, NN NN NN, AD VV VV, and JJ NN CC NN
", "html": null } } } }