{ "paper_id": "O07-4005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:08:24.193104Z" }, "title": "Improve Parsing Performance by Self-Learning", "authors": [ { "first": "Yu-Ming", "middle": [], "last": "Hsieh", "suffix": "", "affiliation": { "laboratory": "", "institution": "Academia Sinica", "location": { "settlement": "Taipei", "country": "Taiwan" } }, "email": "" }, { "first": "Duen-Chi", "middle": [], "last": "Yang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Academia Sinica", "location": { "settlement": "Taipei", "country": "Taiwan" } }, "email": "" }, { "first": "Keh-Jiann", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Academia Sinica", "location": { "settlement": "Taipei", "country": "Taiwan" } }, "email": "kchen@iis.sinica.edu.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "There are many methods to improve performance of statistical parsers. Resolving structural ambiguities is a major task of these methods. In the proposed approach, the parser produces a set of n-best trees based on a feature-extended PCFG grammar and then selects the best tree structure based on association strengths of dependency word-pairs. However, there is no sufficiently large Treebank producing reliable statistical distributions of all word-pairs. This paper aims to provide a self-learning method to resolve the problems. The word association strengths were automatically extracted and learned by parsing a giga-word corpus. Although the automatically learned word associations were not perfect, the constructed structure evaluation model improved the bracketed f-score from 83.09% to 86.59%. We believe that the above iterative learning processes can improve parsing performances automatically by learning word-dependence information continuously from web.", "pdf_parse": { "paper_id": "O07-4005", "_pdf_hash": "", "abstract": [ { "text": "There are many methods to improve performance of statistical parsers. Resolving structural ambiguities is a major task of these methods. In the proposed approach, the parser produces a set of n-best trees based on a feature-extended PCFG grammar and then selects the best tree structure based on association strengths of dependency word-pairs. However, there is no sufficiently large Treebank producing reliable statistical distributions of all word-pairs. This paper aims to provide a self-learning method to resolve the problems. The word association strengths were automatically extracted and learned by parsing a giga-word corpus. Although the automatically learned word associations were not perfect, the constructed structure evaluation model improved the bracketed f-score from 83.09% to 86.59%. We believe that the above iterative learning processes can improve parsing performances automatically by learning word-dependence information continuously from web.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "How to solve structural ambiguity is an important task in building a high-performance statistical parser, particularly for Chinese. Since Chinese is an analytic language, words can play different grammatical functions without inflection. A great deal of ambiguous structures would be produced by parsers if no structure evaluation were applied. There are three main steps in our approach that aim to disambiguate the structures. The first step is to have the parser produce n-best structures. 
Second, we extract word-to-word associations from large corpora and build semantic information. The last step is to build a structural evaluator to find the best tree structure from the n-best candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "There have been some approaches proposed in the past to resolve structure ambiguities. For instance:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Adding on lexical dependencies. Collins [1999] solves structural ambiguity by extracting lexical dependencies from Penn WSJ Treebank and applying dependencies to the statistic model. Lexical dependency (or Word-to-word association, WA) is one type of semantic information. It is a current trend to add on semantic related information in traditional parsers. Some incorporate word-to-word association in their parsing models, such as the Dependency Parsing in Chen et al. [2004] . They take advantage of statistical information of word dependency in the parsing process to produce dependency structures. However, word association methods suffer low coverage when lacking very large tree-annotated training corpora while checking dependency relationships between word pairs.", "cite_spans": [ { "start": 32, "end": 46, "text": "Collins [1999]", "ref_id": "BIBREF6" }, { "start": 459, "end": 477, "text": "Chen et al. [2004]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Adding on word semantic knowledge where CiLin and HowNet information are used in the statistic model in the experiment [Xiong et al. 2005] . Their results work to solve common parsing mistakes efficiently.", "cite_spans": [ { "start": 119, "end": 138, "text": "[Xiong et al. 2005]", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Using a re-annotation method in grammar rules. Johnson [1998] thinks that re-annotating each node with the category of its parent category in Treebank is able to improve parsing performance. Klein et al. [2003] proposes internal, external, and tag-splitting annotation strategies to obtain better results.", "cite_spans": [ { "start": 47, "end": 61, "text": "Johnson [1998]", "ref_id": "BIBREF9" }, { "start": 191, "end": 210, "text": "Klein et al. [2003]", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Building an evaluator. Some people re-rank the structure values and find the best parse [Collins 2000; Charniak et al. 2005] . At first, the parser produces a set of candidate parses for each sentence. Later, the re-ranker finds the best tree through relevance features. The performance is better than without the re-ranker. This paper is going to show a self-learning method to produce imperfect (due to errors produced by automatic parsing) but unlimited amount of word association data to evaluate the n-best trees produced by a feature-extended PCFG grammar. The parser with this WA evaluation is considerably superior to those without the evaluation.", "cite_spans": [ { "start": 88, "end": 102, "text": "[Collins 2000;", "ref_id": "BIBREF7" }, { "start": 103, "end": 124, "text": "Charniak et al. 2005]", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "The organization of the paper is as follows: Section 2 describes how to generate n-best trees in a simple way. In Section 3, we account for building word-to-word association and a primitive semantic class as well. As to the design of the evaluating model, our probability model, coordination of rule probability, and word association probability are presented in Section 4. In Section 5, we discuss and explain the experimental data and results. Ambiguities of PoS are to be considered in a practical system. Section 6 deals with further experiments on automatic tagging with PoS. Finally, we offer concluding remarks in Section 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "It is clear that Treebanks [Chen et al. 2003 ] provide not only instances of phrasal structures and word dependencies but also their statistical distributions. Recently, probabilistic preferences for grammar rules and feature dependencies were incorporated to resolve structure-ambiguities and had great improvement on parsing performance. However, the automatically extracted grammars and feature-dependence pairs suffer the problem of low coverage. We proposed different approaches to solve these two different types of low coverage problems. For the low coverage of extracted grammar, a linguistically-motivated grammar generalization method is proposed [Hsieh et al. 2005] . The low coverage of word association pairs is resolved by a self-learning method of automatic parsing and extracting word dependency pairs from very large corpora.", "cite_spans": [ { "start": 27, "end": 44, "text": "[Chen et al. 2003", "ref_id": "BIBREF2" }, { "start": 657, "end": 676, "text": "[Hsieh et al. 2005]", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Feature Extension of PCFG Grammars for Producing the N-best Trees", "sec_num": "2." }, { "text": "The linguistically-motivated generalized grammars are derived from probabilistic context-free grammars (PCFG) by right-association binarization and feature embedding. The binarized grammars have better coverage than the original grammars directly extracted from Treebank. Features are embedded into the lexical and phrasal categories to improve the precision of generalized grammar. The important features adopted in our grammar are described in the following: Head (Head feature): The PoS of phrasal head will propagate all intermediate nodes within the constituent. Example: S(NP(Head:Nh:\u4ed6)|S' -Head:VF (Head:VF:\u53eb|S' -Head:VF (NP(Head:Nb: \uf9e1\u56db)| VP(Head:VC:\u64bf| NP(Head:Na:\u76ae\u7403)))))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Extension of PCFG Grammars for Producing the N-best Trees", "sec_num": "2." }, { "text": "Linguistic motivations: To constrain the sub-categorization frame.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Extension of PCFG Grammars for Producing the N-best Trees", "sec_num": "2." }, { "text": "The PoS of the leftmost constitute will propagate one-level to its intermediate mother-node only. 
Example: S(NP(Head:Nh: \u4ed6 )|S' -Head:VF (Head:VF: \u53eb |S' -NP (NP(Head:Nb: \uf9e1 \u56db)| VP(Head:VC:\u64bf| NP(Head:Na:\u76ae\u7403)))))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Left (Leftmost feature):", "sec_num": null }, { "text": "Linguistic motivation: To constrain linear order of constituents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Left (Leftmost feature):", "sec_num": null }, { "text": "If phrasal head exists in intermediate node, the nodes will be marked with feature 1; otherwise 0. Example: S(NP(Head:Nh: \u4ed6 )|S' -1 (Head:VF: \u53eb |S' -0 (NP(Head:Nb: \uf9e1 \u56db)|VP(Head:VC:\u64bf| NP(Head:Na:\u76ae\u7403)))))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Head 0/1 (Existence of phrasal head):", "sec_num": null }, { "text": "Linguistic motivation: To enforce unique phrasal head in each phrase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Head 0/1 (Existence of phrasal head):", "sec_num": null }, { "text": "There are two functions of applying the embedded features: one is to increase the precision of the grammar and the other is to produce more candidate parse structures. With features embedded in phrasal categories, PCFG parsers are forced to produce varieties of different possible structures 1 . In order to achieve a better n-best oracle performance (i.e. the ceiling performance achieved by picking the best structure from n bests), we designed some different feature-embedded grammars and try to find a grammar with the better n-best oracle performance. For instance, \"S(NP(Head:Nh: \u4ed6 )|Head:VF: \u53eb | NP(Head:Nb: \uf9e1 \u56db )| VP(Head:VC:\u64bf| NP(Head:Na:\u76ae\u7403)))\". The explanations of feature sets are as follow.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Head 0/1 (Existence of phrasal head):", "sec_num": null }, { "text": "Intermediate node: add on \"Left and Head 1/0\" features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rule type-1:", "sec_num": null }, { "text": "if there is only one member in the NP, add on \"Head\" feature. Non-intermediate node: if there is only one member in the NP, add on \"Head\" feature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-intermediate node:", "sec_num": null }, { "text": "Example: S -NP-Head:VF (NP -Head:Nh (Head:Nh:\u4ed6)|S' -Head:VF-1 (Head:VF:\u53eb |S' -NP-0 (NP -Head:Nb (Head:Nb:\uf9e1\u56db)|VP(Head:VC:\u64bf| NP -Head:Na (Head:Na:\u76ae\u7403)))))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-intermediate node:", "sec_num": null }, { "text": "Rules and their statistical probabilities are extracted from the transformed structures. The grammars are derived and trained from Sinica Treebank 2 . Sinica Treebank contains 38,944 tree-structures and 230,979 words. Table 1 shows the number of rule types in each grammar and Table 2 shows their 50-best oracle bracketed f-scores on three sets of testing data. The three sets of testing data used in our experiments represent \"moderate\", \"difficult\", and \"easy\" scale of Chinese language respectively. Black [1991] proposed two structural evaluating systems in 1991; the more strictly based is named PARSEVAL, and the less strictly based is crossing. We adopt PARSEVAL measures to evaluate the bracketed f-score. A bracket represents the phrasal scope. 
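To make the measure concrete, here is a minimal sketch of a PARSEVAL-style bracketed f-score, assuming each tree has already been reduced to a multiset of (start, end) phrase spans; it is our own illustration, not code from any of the cited systems.

```python
from collections import Counter

def bracketed_f_score(gold_spans, test_spans):
    """PARSEVAL-style bracketed f-score; labels are ignored on purpose,
    since only the phrasal scope (the bracket) is compared."""
    gold, test = Counter(gold_spans), Counter(test_spans)
    matched = sum((gold & test).values())  # brackets agreeing in scope
    if matched == 0:
        return 0.0
    precision = matched / sum(test.values())
    recall = matched / sum(gold.values())
    return 2 * precision * recall / (precision + recall)

# Toy example: two gold brackets, two proposed brackets, one exact match.
print(bracketed_f_score([(0, 4), (2, 4)], [(0, 4), (1, 4)]))  # 0.5
```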
The reason we don't use a labeled f-score is that we aim to evaluate the phrasal scope, rather than the effect brought by the phrasal category. For example, the dependency information is much more related to the structure. From the above table, we can observe that the \"Rule type-3\" outperforms the \"Rule type-1\" and \"Rule type-2\". We adopt the approach used in Charniak et al. [2005] to analyze the n-best parse. Table 3 shows the best bracketed f-score values of different n-best parse trees. From the results, we observe that the improvement after n=5 is slight. Thus, the number of ambiguous candidates can be dynamically adjusted according to the complexity of input sentences. For normal sentences, we may consider to take n=5 in order to minimize the complexity. For long sentences or sentences with auto PoS tagging should take as large as n=50 to raise the ceiling of the best f-score. For each candidate tree, its syntactic plausibility is obtained by rule probabilities produced by PCFG parser. In addition to this, we need semantic related information to help with finding the best tree structure among candidate trees. In the next section, we will look at some methods of attaining semantic related information.", "cite_spans": [ { "start": 503, "end": 515, "text": "Black [1991]", "ref_id": "BIBREF0" }, { "start": 1116, "end": 1138, "text": "Charniak et al. [2005]", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 218, "end": 225, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 277, "end": 284, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 1168, "end": 1175, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Non-intermediate node:", "sec_num": null }, { "text": "We could extract word knowledge from Treebanks, but the availability of a very large set of trees with rich linguistic annotations has long been a problem. A cheaper way to extract word knowledge is to automatically parse a large amount of data. We believe that with good parsing performance, we could get sufficient information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Auto-Extracting World Knowledge", "sec_num": "3." }, { "text": "Therefore, in our experiments, we use a Gigaword Chinese corpus to extract word dependence pairs. The Gigaword corpus contains about 1.12 billion Chinese characters, including 735 million characters from Taiwan's Central News Agency (traditional characters), and 380 million characters from Xinhua News Agency (simplified characters) 3 . Word associations are extracted from the texts of Central News Agency (CNA). First we use Chinese Autotag System [Tsai et al. 2003 ], developed by Academia Sinica, to process the segmentation and PoS tagging of the texts. This system reaches a performance of 95% segmentation and 93% tagging accuracies. Then we parse each sentence 4 in the corpus and assign semantic roles to each constituent. Based on the head word information, we extract dependence word-pairs between head words and their arguments or modifiers. The following illustrates how the automatic knowledge extraction works. We input a Chinese sentence to the parser:", "cite_spans": [ { "start": 451, "end": 468, "text": "[Tsai et al. 2003", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Auto-Extracting World Knowledge", "sec_num": "3." }, { "text": "\u4ed6 \u53eb \uf9e1\u56db \u64bf \u76ae\u7403", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Auto-Extracting World Knowledge", "sec_num": "3." 
}, { "text": "Here is the sentence after segmentation and PoS tagging:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ta jiao Li-si jian qiu He ask Li-si pick ball \"He asked Li-si to pick up the ball.\"", "sec_num": null }, { "text": "\u4ed6(Nh) \u53eb(VF) \uf9e1\u56db(Nb) \u64bf(VC) \u76ae\u7403(Na)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ta jiao Li-si jian qiu He ask Li-si pick ball \"He asked Li-si to pick up the ball.\"", "sec_num": null }, { "text": "The parser analyzes the sentence structure and assigns roles to each phrase as follows. Then, word-pair knowledge of heads and their modifiers are extracted as shown in Figure 1 . Figure 1 shows the examples of extracted word associations. \"Role1/PoS1/Word1 and Role2/Role2/Word2\" represent the right-and left-part of the word-pairs. \"Role\", \"PoS\", and \"Word\" here mean semantic role, part-of-speech and word respectively. To reduce the number of word association types, we transform the original word-pairs into three simplified types of the word pairs: In the word pairs, \"H\" denotes Head, \"W\" means word, and \"C\" refers to the simplified PoS tag 5 , \"X\" refers to any semantic role other than Head role. So, we get basic information of experimental data as follows:", "cite_spans": [], "ref_spans": [ { "start": 169, "end": 177, "text": "Figure 1", "ref_id": "FIGREF1" }, { "start": 180, "end": 188, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Ta jiao Li-si jian qiu He ask Li-si pick ball \"He asked Li-si to pick up the ball.\"", "sec_num": null }, { "text": "Role1 PoS1 Word1 Role2 PoS2 Word2 X Nh \u4ed6 H VF \u53eb H VF \u53eb X Nb \uf9e1\u56db H VF \u53eb X VC \u64bf H VC \u64bf X Na \u76ae\u7403", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ta jiao Li-si jian qiu He ask Li-si pick ball \"He asked Li-si to pick up the ball.\"", "sec_num": null }, { "text": "The processes above are repeated for each new input sentence from the Gigaword corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ta jiao Li-si jian qiu He ask Li-si pick ball \"He asked Li-si to pick up the ball.\"", "sec_num": null }, { "text": "Finally, we obtain a great deal of knowledge about dependent word pairs and their association strengths. In our experiments, we have 37,489,408 sentences that are successfully parsed and contain word association information. The number of extracted word associations is 221,482,591. The extracted word to word associations that undergo structure analysis and head word assignment are not perfectly correct, but they are more informative and precise than simply taking words on the left and right hand window.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ta jiao Li-si jian qiu He ask Li-si pick ball \"He asked Li-si to pick up the ball.\"", "sec_num": null }, { "text": "Data sparseness is always a problem of statistical evaluation methods. As mentioned in the last section, we automatically segment, tag, parse and assign roles in CNA data, and then extract word associations. We test our extracted word association data in five different levels of granularities. Level-1 to Level-5 represents HWC_WC, HW_W, HC_WC, HW_C, and HC_C respectively. 
The 5 levels of word associations derived from Figure 1 Theoretically, the precision of fine-grain level like HWC_WC is much better, but it suffers the problem of data sparseness, hence, its coverage rate is low; on the other hand, the coarse-grain level has best coverage rate but relatively low precision. This is the trade-off between precision and coverage. Therefore, we carry out a series of experiments to find a balanced measurement by linear combination of different level associations. There will be experimental results in the following sections.", "cite_spans": [], "ref_spans": [ { "start": 422, "end": 430, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Coverage Rates of the Word Associations", "sec_num": "3.1" }, { "text": "Why not use HWC_W or HC_W? From our observation, we have found that these two show similar performance with HWC_WC and HC_WC respectively; therefore, we exclude them. Besides, there are some asymmetric representations, such as the use of \"HW_C\". They are used to raise the coverage rate in word association while not being too general.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coverage Rates of the Word Associations", "sec_num": "3.1" }, { "text": "We like to see the bi-gram coverage rates for each level of representation. After CNA producing word associations in each level, we observe the relationship between the amount of word associations and the coverage rates of the three texts: Sinica, Sinorama, and Textbook. We extracted word associations from the three data sets in each level and calculated their coverage rates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coverage Rates of the Word Associations", "sec_num": "3.1" }, { "text": "We tested the coverage rates for 10 different size word association data, of which each was extracted from different size corpora. Figure 2 shows coverage relationships between five levels and sizes of word association data for three testing data. Figure 2 shows that larger data increases the coverage rates, but the coverage of the fine-grained level word associations, e.g. Level-1 (HWC_WC), is only about 70%, which is far from saturation. Nonetheless, the coverage rate can be improved by reading more texts from the web. The coarse-grained level associations, e.g. Level-5 (HC_C), cover the most bi-gram categories. However, it may not be very useful, since syntactic associations which are partially embedded in the PCFG are redundant. To attain a better evaluation model, we derived new associations between semantic classes. Criteria for semantic classification are discussed in the following section. ", "cite_spans": [], "ref_spans": [ { "start": 131, "end": 139, "text": "Figure 2", "ref_id": "FIGREF3" }, { "start": 248, "end": 256, "text": "Figure 2", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Coverage Rates of the Word Associations", "sec_num": "3.1" }, { "text": "For precision and coverage tradeoffs, we face a dilemma of using word or PoS category. We find that the coverage of word is low, though its precision is high; on the contrary, the coverage of PoS is too high to be discriminative. We hope to find a classification that covers enough information and is discriminative as well; that is, a classification system that falls between word and PoS category. A semantic classification is the solution. There are many ways to classify semantic properties of words. Xiong et al. 
[2005] adopt CiLin and HowNet as the semantic classes in their experiment; however, data sparseness remains a problem to be solved. Here, we propose a simple approach to build a semantic-class-based association strength for word pairs, which will be our Level-6 (HS_S). Semantic class information is put into Level-6 in order to get high coverage and to avoid the redundant syntactic associations of the other levels. Besides, it can smooth the problem of data sparseness.", "cite_spans": [ { "start": 505, "end": 524, "text": "Xiong et al. [2005]", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Incorporating Semantic Knowledge", "sec_num": "3.2" }, { "text": "The idea is to classify words by their head morpheme. It begins with the transformation of every input \"WORD, POS\" pair in the data. We adopt the affix database of high-frequency verbs and nouns [Chiu et al. 2004 ] to set up the noun and verb classes. There are 34,857 examples of compound words in the database. As to determinative measures (DM), we refer to the dictionary of measure words and divide the DMs in the data into thirteen categories, according to the meanings of the measure words. The thirteen categories are general, event, length, science, approximate measures, weight, square measures, container, capacity, time, currency value, classification measures, and measures of verbs. Finally, we consult the parts-of-speech analyses [CKIP 1993 ] and set up the transformation rules that map a word-PoS pair to its semantic class. The transformation algorithm is shown in Appendix A. Take \"\uf9e1\u56db, Nb\" as an example: its semantic class is \"PersonalName(\u4eba\u540d)\" in our classification. In another instance, the semantic class of \"\u76ae\u7403, Na\" is \"Na_\u7403\". The transformation rules are PoS dependent. For each PoS we refer to CKIP [1993] , which explains the PoS with words and examples. We set up discriminative subcategorization on some parts of speech (N/P/D/A) according to the distribution of PoS and word frequency. As to the verbs, we use an initial step to assign an initial value. Taking the PoS \"A\" as an example, adding prefix information is more useful than using \"A\" alone.", "cite_spans": [ { "start": 192, "end": 209, "text": "[Chiu et al. 2004", "ref_id": "BIBREF4" }, { "start": 739, "end": 749, "text": "[CKIP 1993", "ref_id": null }, { "start": 1122, "end": 1128, "text": "[1993]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Incorporating Semantic Knowledge", "sec_num": "3.2" }, { "text": "Role1/PoS1/Word1/Class1 | Role2/PoS2/Word2/Class2: X/Nh/\u4ed6/\u4ed6 | H/VF/\u53eb/\u53eb; H/VF/\u53eb/VF_\u53eb | X/Nb/\uf9e1\u56db/PersonalName; H/VF/\u53eb/VF_\u53eb | X/VC/\u64bf/VC_\u64bf; H/VC/\u64bf/VC_\u64bf | X/Na/\u76ae\u7403/Na_\u7403", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Role1", "sec_num": null }, { "text": "The following example is the result of applying the DM, prefix, and affix rules through a function at Level-6 (HS_S). It is necessary to discriminate the syntactic head from the semantic head in the word association extraction of GPs and PPs.
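Before turning to that example, here is a minimal sketch of the word-to-class transformation; the rules shown are a tiny illustrative subset of the algorithm in Appendix A, and the surname list and affix slice are hypothetical stand-ins for the real databases.

```python
SURNAMES = {"李", "王", "陳"}          # hypothetical surname list
NOUN_HEAD_MORPHEME = {"皮球": "球"}    # hypothetical slice of the affix database

def semantic_class(word, pos):
    """Map a (word, PoS) pair to its Level-6 semantic class."""
    if pos.startswith("Nb") and word and word[0] in SURNAMES:
        return "PersonalName(人名)"
    if pos.startswith("Nh"):                         # pronouns keep the word itself
        return word
    if word in NOUN_HEAD_MORPHEME:
        return pos + "_" + NOUN_HEAD_MORPHEME[word]  # e.g. "Na_球"
    if pos.startswith("V"):                          # verbs: PoS plus head morpheme
        return pos + "_" + word[-1]                  # e.g. "VF_叫", "VC_撿"
    return pos                                       # fall back to the PoS alone

print(semantic_class("李四", "Nb"))  # PersonalName(人名)
print(semantic_class("皮球", "Na"))  # Na_球
```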
In Row 4 of the table above (marked in a different color), \"\uf983\u9014\" is the semantic head of the GP \"\uf983\u9014..\u4e2d\", while the word \"\u4e2d\" is the syntactic head of the phrase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Role1", "sec_num": null }, { "text": "We estimate the word association coverage rate of the Level-6 associations. From the results shown in Figure 3 , the coverage rate of Level-6 is higher than that of Level-2, and the problem of data sparseness is indeed moderately smoothed. Next, we will use different levels of associations to construct an evaluation model that finds the best structure among the numerous ambiguous candidates.", "cite_spans": [], "ref_spans": [ { "start": 102, "end": 110, "text": "Figure 3", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Role1", "sec_num": null }, { "text": "A sentence structure is evaluated by its syntactic and semantic plausibility. The syntactic plausibility is modeled by the product of the phrase rule probabilities of its syntactic tree. The semantic plausibility is modeled by the word association strengths between head words and their arguments or modifiers. For an input sentence s, the feature-embedded PCFG parser produces its n-best trees {y_1(s), ..., y_n(s)}. The evaluating model finds the best structure according to the rule probability (syntactic) and the corresponding word association probability (semantic). Rule probabilities are generated by the PCFG parser when the n-best trees are produced. We estimate word association probabilities with the following formula. In the formula, \"Head\" means the head of a word association, notated as HWC, HC, or HW; \"Modify\" means the dependent daughter, notated as WC, W, or C.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building Evaluation Model", "sec_num": "4." }, { "text": "We calculate the related \u03bb and \u03b8 values from the development sets. The development sets are adopted from trees in the training data. In evaluation, we vary \u03bb and \u03b8 in steps of 0.1 from 0 to 1 and keep the setting that yields the best results. The experiment results will be shown in the following section. Moreover, we verify whether the word associations are reasonable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Building Evaluation Model", "sec_num": "4." }, { "text": "For instance, the following example has eight different ambiguous parsing results produced by the parser. Figure 4 shows the WA values of the first sentence at each level. Similarly, the WA data are produced for all other input sentences. Then, we derive the evaluation value Value(y(s)) for each ambiguous sentence and find the best result with respect to different weights.", "cite_spans": [], "ref_spans": [ { "start": 107, "end": 115, "text": "Figure 4", "ref_id": "FIGREF8" } ], "eq_spans": [], "section": "Building Evaluation Model", "sec_num": "4." }, { "text": "The parsing performance and our evaluating model are evaluated by the standard PARSEVAL metrics. In our experiments, we only use sentences of six or more words for testing, since Hsieh et al. [2005] found that the bracketing f-score of short sentences (1 to 5 words) is over 90%.
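As a bridge from the Section 4 model to the numbers below, here is a minimal sketch of the structure selection, assuming the per-candidate RuleValue and per-level WAValue sums are already computed; the min-max normalization and the linear combination follow the formulas above, and every name in the sketch is ours.

```python
def normalize(xs):
    # The (i - min) / (max - min) normalization applied to RuleValue and WAValue.
    lo, hi = min(xs), max(xs)
    return [0.0 if hi == lo else (x - lo) / (hi - lo) for x in xs]

def best_candidate(rule_values, wa_values_by_level, lam=0.7, thetas=(0.7, 0.3, 0.5)):
    """rule_values: one PCFG score per candidate tree.
    wa_values_by_level: for each chosen level (L1, L4, L6), one summed WA
    score per candidate; unseen word pairs are floored at a small sigma upstream.
    Value(y) = lam * RuleValue(y) + (1 - lam) * sum_i theta_i * WAValue_i(y).
    Defaults mirror the weights reported in Section 5."""
    rule = normalize(rule_values)
    levels = [normalize(level) for level in wa_values_by_level]
    scores = [
        lam * rule[k] + (1 - lam) * sum(t * lv[k] for t, lv in zip(thetas, levels))
        for k in range(len(rule))
    ]
    return max(range(len(scores)), key=scores.__getitem__)  # index of the best tree
```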
We use the n-best tree structures produced from \"Rule type-3\", mentioned in Section 2. The oracle 50-best and the top 1-best bracketed f-scores of \"Rule type-3\" are listed in Table 4 . Taking the Sinica data as an example, we find that the 50-best oracle score is 90.11%; in contrast, the 1-best f-score is 83.09%. To simplify our evaluation model, we first try to find the most effective levels of associations. In turn, the parser uses only one level of association together with the rule probabilities to select the best structure from the n candidates. That is: Figure 5 displays the bracketing f-scores of the testing data for each different level of association. The best results of Level-1 slightly surpass those of Level-2; the results of Level-6 overtake those of Level-3; and Level-6 performs better than Level-5. Therefore, considering the type of information, data coverage, and dimension reduction, only three levels (Level-1, Level-4, and Level-6) are taken into consideration for the final evaluation model. Finally, we adjust the weights of the L1, L4, and L6 associations and the rule probabilities to evaluate the plausibility of the structures in the 50-best parse trees of the development data; the results of the experiments on the three testing data sets are shown in Table 5 . For our experiments, \u03bb=0.7, \u03b8_1=0.7, \u03b8_4=0.3, and \u03b8_6=0.5. From the results shown in Table 5 , we see that semantic information is effective in finding better structures: the bracketing f-scores are raised by about 3.5%~5.2%. In Charniak et al. [2005] , the f-score was improved from 89.7% (without re-ranking) to 91.02% (with re-ranking) for English 6 ; the oracle f-score was 96.8% for the n-best in their paper. We also believe that, with more data parsed, better word-association values will be obtained; hence, parsing performance will be improved by self-learning. Our WA data was first extracted from the 1-best results of the parser. With the parser producing the n-best trees and the evaluating system finding the best structure, we can continuously derive more and better word associations. Similarly, with better WA reference statistics, we should be able to choose better structures. This is how self-learning works. The left side of Figure 6 shows how we produce knowledge initially, and the right side of Figure 6 shows the repeated procedure of automatic knowledge extraction and accumulation. From the results shown in Table 4 and Table 5 , we see that there is still much room for improvement. 6 The English parser has better evaluation results than the Chinese one due to the better performance of the parser and language differences. The characteristic of a strictly regulated grammar in English gives an advantage in parsing. Nonetheless, we have to admit that there is plenty of room for improvement in Chinese parsing.", "cite_spans": [ { "start": 178, "end": 197, "text": "Hsieh et al. [2005]", "ref_id": "BIBREF8" }, { "start": 1821, "end": 1843, "text": "Charniak et al. [2005]", "ref_id": "BIBREF1" }, { "start": 2805, "end": 2806, "text": "6", "ref_id": null } ], "ref_spans": [ { "start": 489, "end": 496, "text": "Table 4", "ref_id": "TABREF10" }, { "start": 877, "end": 885, "text": "Figure 5", "ref_id": "FIGREF11" }, { "start": 1578, "end": 1585, "text": "Table 5", "ref_id": "TABREF11" }, { "start": 1677, "end": 1684, "text": "Table 5", "ref_id": "TABREF11" }, { "start": 2540, "end": 2548, "text": "Figure 6", "ref_id": "FIGREF12" }, { "start": 2615, "end": 2623, "text": "Figure 6", "ref_id": "FIGREF12" }, { "start": 2734, "end": 2741, "text": "Table 4", "ref_id": "TABREF10" }, { "start": 2746, "end": 2753, "text": "Table 5", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "5." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Value_{L_i}(y(s)) = \\lambda \\cdot RuleValue(y(s)) + (1 - \\lambda) \\cdot WAValue_{L_i}(y(s))", "eq_num": "( )" } ], "section": "Experimental Results", "sec_num": "5." }, { "text": "Perfect testing data was used in the above experiments, without considering PoS tagging errors. However, in reality, PoS tagging errors will degrade parsing performance. The real parsing performance when accepting input from a PoS tagging system is shown in Table 6 (1). In this table, \"Autotag\" means marking up the best PoS on the segmented data. The na\u00efve approach to overcoming PoS tagging errors is to delay the resolution of the ambiguous PoS of words with lower-confidence tagging scores and leave the ambiguous PoS to be resolved in the parsing stage. In Tsai et al. [2003] , the tagging confidence of each word is measured by the following value:", "cite_spans": [ { "start": 566, "end": 584, "text": "Tsai et al. [2003]", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 258, "end": 265, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Further Experiments on Sentences with Automatic PoS Tagging", "sec_num": "6." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\text{Confidence value} = \\frac{P(c_1, w)}{P(c_1, w) + P(c_2, w)}", "eq_num": "(7)" } ], "section": "Further Experiments on Sentences with Automatic PoS Tagging", "sec_num": "6." }, { "text": "where P(c1,w) and P(c2,w) are the probabilities assigned by the tagging model to the best candidate \"c1,w\" and the second-best candidate \"c2,w\". Some examples follow: In Table 6 (2), \"Autotag with confidence value=1.0\" means that if the confidence value \u2266 1.0, we list all possible PoSs for the parser to decide. The experimental results of the 1-best, Table 6 (2), show that delaying ambiguous PoS resolution does not by itself improve parsing performance, since PoS ambiguities increase structural ambiguities and the PCFG parser is not robust enough to select better syntactic structures. However, for the 50-best experiment, taking the oracle score as the example, the 50-best oracle f-scores shown in Table 6 (2) are better than the results without ambiguous tags shown in Table 6 (1). Therefore, it is more likely that better results can be found after applying our evaluation model to the set of data with better oracle scores.
Hence, we try to see the power of our evaluation model by leaving the ambiguous PoS tags in the testing data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Further Experiments on Sentences with Automatic PoS Tagging", "sec_num": "6." }, { "text": "(1) Autotag; (2) Autotag with confidence value = 1.0. We then apply our evaluation model to select the best structure from the 50-best parses. The results are shown in Table 7 . The experiment above takes \"Rule type-3\" for the n-best parses. The bracketed f-score is raised from the original 73.41% to 79.34%, an improvement of about 6% on the Sinica testing data. The Sinorama data is improved from 68.34% to 74.78%, and the Textbook data from 77.83% to 82.59%. This shows that our evaluating model is robust enough to handle ambiguous PoS tagging and produces better results than solely using the unique tag produced by Autotag.", "cite_spans": [], "ref_spans": [ { "start": 166, "end": 173, "text": "Table 7", "ref_id": "TABREF14" } ], "eq_spans": [], "section": "Table 6. Oracle bracketed f-scores of different autotag for parsing:", "sec_num": null }, { "text": "Parsers of any language aim to correctly analyze the syntactic structure of a sentence, often with the help of semantic information. This paper has shown a self-learning method that produces an imperfect (due to errors produced by automatic parsing) but unlimited amount of word association data to evaluate the n-best trees produced by a feature-extended PCFG grammar. We show that, although the statistical association strengths produced by automatic parsing are not perfect, the extracted data is reliable enough for measuring the plausibility of ambiguous structures. The parser with this WA evaluation is considerably superior to the parser without it. We believe that the above iterative learning processes can improve parsing performance automatically by learning word-dependence knowledge continuously from the web. We also propose a method to modify our grammars to increase the oracle scores of the produced n-best sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "On the other hand, we offer a general syntactic and semantic evaluation model. We input the n-best parses to our evaluating model, which selects the best parse from this set using the rule and semantic probabilities. Under the standard PARSEVAL framework, the bracketed f-score of the selected trees is 86.59%, higher than that of the original 1-best. Furthermore, words with ambiguous PoS are also parsed and evaluated on the n-best, and we show that our evaluating model is robust enough to improve parsing results on sentences with ambiguous PoS tagging.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "From our experimental results, we find that sentences with coordinate structures are more difficult to deal with. Information about semantic parallelism, rather than semantic dependencies, is required to resolve conjunctive structures. The extracted word associations do not have enough discriminative power to resolve both the syntactic and the semantic symmetry of conjunctive structures.
The possible improvement may come from modifying the extraction method or predicting their plausible ranges before parsing. As to other difficult sentences, for example, in Figure 2 , the coverage rate of Level 2 (HW_W) associations is only about 70%, which is far less than needed. We may expand our data to read more web texts to resolve this problem.", "cite_spans": [], "ref_spans": [ { "start": 552, "end": 560, "text": "Figure 2", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "In future research, we plan to improve the quality of word-association. Four aspects need to be addressed: improving the accuracy of the PoS tagger, enhancing the parser's ability to solve common mistakes (such as parsing conjunctive structures), extracting more word associations by reading, and parsing text from web. As to the evaluation model, properly corresponding semantic classifications from coarse to fine-grained categories are needed in ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "The parser adopts an Earley's Algorithm. It is a top-down left-to-right algorithm. So, in parts that have the same non-terminals, we keep only the best structure after pruning, to reduce the load of calculation and thus fasten the parsing speed. Therefore, if we add different features in the Top-Level rules, we'll get more results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://treebank.sinica.edu.tw/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC2003T09 4 An existing parser is used to produce 1-best tree of a sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The simplified way please refer to CKIP 93-05 Technical Report.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was supported in part by National Science Council under Grant NSC 95-2422-H-001-008-and National Digital Archives Program Grant 95-0210-29-\u620a -13-09-00-2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "/* e.g. Nep/Neqa/Neqb/Nes/Neu */ if POS is pronoun then CLASS=WORD; /* e.g. Nh */ if POS is time noun then CLASS='Time'; /* e.g. Nd */ if POS is Postposition/Place Noun/Localizer then CLASS='Location';/* e.g. Ng/Nc/Ncd */ if POS is Proper Noun and is family names then CLASS='PersonalName'; /* e.g. Nb */ if POS is aspectual adverb then CLASS=POS /* e.g. Di */ if POS is pre/post-verbal adverb of degree then CLASS=' Df'+suffix(Word) /*e.g. 
Dfa/Dfb */ if POS is VD/VCL/VL then CLASS=POS+suffix (WORD) ", "cite_spans": [ { "start": 417, "end": 433, "text": "Df'+suffix(Word)", "ref_id": null }, { "start": 494, "end": 500, "text": "(WORD)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A Procedure for Quantitatively Comparing the Syntactic Coverage of English Grammars", "authors": [ { "first": "E", "middle": [], "last": "Black", "suffix": "" }, { "first": "S", "middle": [], "last": "Abney", "suffix": "" }, { "first": "D", "middle": [], "last": "Flickenger", "suffix": "" }, { "first": "C", "middle": [], "last": "Gdaniec", "suffix": "" }, { "first": "R", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "P", "middle": [], "last": "Harrison", "suffix": "" }, { "first": "D", "middle": [], "last": "Hindle", "suffix": "" }, { "first": "R", "middle": [], "last": "Ingria", "suffix": "" }, { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "J", "middle": [], "last": "Klavans", "suffix": "" }, { "first": "M", "middle": [], "last": "Liberman", "suffix": "" }, { "first": "M", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "B", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "T", "middle": [], "last": "Strzalkowski", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the Workshop on Speech and Natural language", "volume": "", "issue": "", "pages": "306--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Black, E., S. Abney, D. Flickenger, C. Gdaniec, R. Grishman, P. Harrison, D. Hindle, R. Ingria, F. Jelinek, J. Klavans, M. Liberman, M. Marcus, S. Roukos, B. Santorini, and T. Strzalkowski, \"A Procedure for Quantitatively Comparing the Syntactic Coverage of English Grammars,\" In Proceedings of the Workshop on Speech and Natural language, 1991, pp. 306-311.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Coarse-to-fine n-best parsing and MaxEnt discriminative reranking", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" }, { "first": "M", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "173--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charniak, E., and M. Johnson, \"Coarse-to-fine n-best parsing and MaxEnt discriminative reranking,\" In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, 2005, Ann Arbor, MI, pp. 173-180.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Sinica Treebank: design criteria, representational issues and implementation", "authors": [ { "first": "K.-J", "middle": [], "last": "Chen", "suffix": "" }, { "first": "C.-R", "middle": [], "last": "Huang", "suffix": "" }, { "first": "F.-Y", "middle": [], "last": "Chen", "suffix": "" }, { "first": "C.-C", "middle": [], "last": "Luo", "suffix": "" }, { "first": "M.-C", "middle": [], "last": "Chang", "suffix": "" }, { "first": "C.-J", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Z.-M", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2003, "venue": "Building and Using Parsed Corpora. Text, Speech and Language Technology", "volume": "20", "issue": "", "pages": "231--248", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, K.-J., C.-R. Huang, F.-Y. Chen, C.-C. 
Luo, M.-C. Chang, C.-J. Chen, and Z.-M. Gao, \"Sinica Treebank: design criteria, representational issues and implementation,\" In Anne Abeille, (ed.): Building and Using Parsed Corpora. Text, Speech and Language Technology, 2003, 20, pp. 231-248.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Deterministic Dependency Structure Analyzer for Chinese", "authors": [ { "first": "Y", "middle": [], "last": "Chen", "suffix": "" }, { "first": "M", "middle": [], "last": "Asahara", "suffix": "" }, { "first": "Y", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the First International Join Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "135--140", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, Y., M. Asahara, and Y. Matsumoto, \"Deterministic Dependency Structure Analyzer for Chinese,\" In Proceedings of the First International Join Conference on Natural Language Processing, 2004, Sanya City, Hainan Island, China, pp. 135-140.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Compositional semantics of mandarin affix verbs", "authors": [ { "first": "C.-M", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "J.-Q", "middle": [], "last": "Luo", "suffix": "" }, { "first": "K.-J", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2004, "venue": "Proceedings of ROCLING XVI: Conference on Computational Linguistics and Speech Processing", "volume": "", "issue": "", "pages": "131--139", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chiu, C.-M., J.-Q. Luo, and K.-J. Chen, \"Compositional semantics of mandarin affix verbs.\" In Proceedings of ROCLING XVI: Conference on Computational Linguistics and Speech Processing, 2004, Taipei, pp. 131-139.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Chinese Knowledge Information processing)", "authors": [], "year": 1993, "venue": "CKIP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "CKIP (Chinese Knowledge Information processing), \"The categorical analysis of Chinese,\" Technical Report No. 93-05, Institute of Information Science Academia Sinica, Taipei, 1993.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Head-driven statistical models for natural language parsing", "authors": [ { "first": "M", "middle": [], "last": "Collins", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins, M., \"Head-driven statistical models for natural language parsing,\" PhD thesis, University of Pennsylvania, 1999.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Discriminative reranking for natural language parsing", "authors": [ { "first": "M", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2000, "venue": "Machine Learning: Proceedings of the Seventeenth International Conference (ICML 2000", "volume": "", "issue": "", "pages": "175--182", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins, M., \"Discriminative reranking for natural language parsing,\" In Machine Learning: Proceedings of the Seventeenth International Conference (ICML 2000), 2000, Morgan Kaufmann, San Francisco, CA, pp. 
175-182.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Linguistically-motivated grammar extraction, generalization and adaptation", "authors": [ { "first": "Y.-M", "middle": [], "last": "Hsieh", "suffix": "" }, { "first": "D.-C", "middle": [], "last": "Yang", "suffix": "" }, { "first": "K.-J", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Second International Join Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "177--187", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hsieh, Y.-M., D.-C. Yang, and K.-J. Chen, \"Linguistically-motivated grammar extraction, generalization and adaptation,\" In Proceedings of the Second International Join Conference on Natural Language Processing, 2005, Jeju Island, Republic of Korea, pp. 177-187.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "PCFG models of linguistic tree representations", "authors": [ { "first": "M", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "24", "issue": "4", "pages": "613--632", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johnson, M., \"PCFG models of linguistic tree representations,\" Computational Linguistics, 1998, 24(4), pp. 613-632.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Accurate unlexicalized parsing", "authors": [ { "first": "D", "middle": [], "last": "Klein", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "423--430", "other_ids": {}, "num": null, "urls": [], "raw_text": "Klein, D., and C. D. Manning, \"Accurate unlexicalized parsing,\" In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, 2003, Sapporo, Japan, pp. 423-430.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Context-rule model for PoS tagging", "authors": [ { "first": "Y.-F", "middle": [], "last": "Tsai", "suffix": "" }, { "first": "K.-J", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2003, "venue": "Proceedings of 17th Pacific Asia Conference on Language, Information and Computation", "volume": "", "issue": "", "pages": "146--151", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsai, Y.-F., and K.-J. Chen, \"Context-rule model for PoS tagging,\" In Proceedings of 17th Pacific Asia Conference on Language, Information and Computation (PACLIC 17), 2003, COLIPS, Sentosa, Singapore, pp. 146-151.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Parsing the Penn Chinese Treebank with semantic knowledge", "authors": [ { "first": "D", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "S", "middle": [], "last": "Li", "suffix": "" }, { "first": "Q", "middle": [], "last": "Liu", "suffix": "" }, { "first": "S", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Y", "middle": [], "last": "Qian", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Second International Join Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "70--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiong, D., S. Li, Q. Liu, S. Lin, and Y. Qian, \"Parsing the Penn Chinese Treebank with semantic knowledge,\" In Proceedings of the Second International Join Conference on Natural Language Processing, 2005, Jeju Island, Republic of Korea, pp. 
70-81.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "Example: S(NP -Head:Nh (Head:Nh:\u4ed6)|S' -Head:VF-1 (Head:VF:\u53eb|S' -NP-0 (NP -Head:Nb (Head:Nb:\uf9e1 \u56db)|VP(Head:VC:\u64bf| NP -Head:Na (Head:Na:\u76ae\u7403)))))Rule type-2: Intermediate node: add on \"Left and Head 1/0\" features. Non-intermediate node: add on \"Head and Left\" features, if there is only one member in the NP, add on \"Head\" feature. Example: S -NP-Head:VF (NP -Head:Nh (Head:Nh:\u4ed6)|S' -Head:VF-1 (Head:VF:\u53eb |S' -NP-0 (NP -Head:Nb (Head:Nb:\uf9e1\u56db)|VP -Head:VC (Head:VC:\u64bf| NP -Head:Na (Head:Na:\u76ae\u7403))))) Rule type-3: Intermediate node: add on \"Left, and Head 1/0\" features. Top-Level node: add on \"Head and Left\" features. (see example of S -NP-Head:VF )" }, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "A sample for word association extraction." }, "FIGREF2": { "uris": null, "num": null, "type_str": "figure", "text": "(a) head word on the left hand side: (H_W_C, X_W_C); (b) head word on the right hand side: (X_W_C, H_W_C); (c) coordinating structure: (H_W_C, H_W_C)." }, "FIGREF3": { "uris": null, "num": null, "type_str": "figure", "text": "Coverage rates vs. size of Corpus: (a) Sinica; (b) Sinorama; (c) Textbook." }, "FIGREF4": { "uris": null, "num": null, "type_str": "figure", "text": "(theme:NP(quantifier:DM:\uf978\u500b|Head:Nab:\u4eba)|deontics:Dbab:\u80fd|Head:VC1:\u5728 |goal:GP(DUMMY:NP(property:Nad:\u4eba\u751f|Head:Nad:\uf983\u9014)|Head:Ng:\u4e2d))" }, "FIGREF5": { "uris": null, "num": null, "type_str": "figure", "text": "WA coverage rate of Level-6." }, "FIGREF6": { "uris": null, "num": null, "type_str": "figure", "text": "a common problem in dealing with corpora. A minimal value \u03c3 is used to smooth data sparseness: to each candidate tree Yn(S) is defined as: is the total word association value in different level n. RuleValue and WAValue are normalized, i.e. (i-min)/(max-min). The following shows weighting in different levels and explanation of formula: collocating with rule probability, we hope to find the best tree *" }, "FIGREF7": { "uris": null, "num": null, "type_str": "figure", "text": "Input segmentation with PoS tag: \u6211\u5011(Nh) \u90fd(D) \u559c\u6b61(VK) \u8774\u8776(Na) Parsing results: #1:1.[0] S(NP(Head:Nh:\u6211\u5011)|D:\u90fd|Head:VK:\u559c\u6b61|NP(Head:Na:\u8774\u8776))# #1:2.[0] NP(Nh:\u6211\u5011|Head:NP(VP(D:\u90fd|Head:VK:\u559c\u6b61)|Head:Na:\u8774\u8776))# #1:3.[0] VP(PP(Head:Nh:\u6211\u5011)|VP(D:\u90fd|Head:VK:\u559c\u6b61)|Head:Na:\u8774\u8776)# #1:4.[0] NP(VP(Head:Nh:\u6211\u5011)|Head:NP(VP(D:\u90fd|Head:VK:\u559c\u6b61)|Head:Na:\u8774\u8776))# #1:5.[0] VP(Head:VP(VP(Head:Nh:\u6211\u5011)|VP(D:\u90fd|Head:VK:\u559c\u6b61))|NP(Head:Na:\u8774\u8776))# #1:6.[0] NP(S(NP(Head:Nh:\u6211\u5011)|D:\u90fd|Head:VK:\u559c\u6b61)|Head:Na:\u8774\u8776)# #1:7.[0] VP(PP(Head:Nh:\u6211\u5011)|Head:VP(VP(D:\u90fd|Head:VK:\u559c\u6b61)|VP(Head:Na:\u8774\u8776)))# #1:8.[0] VP(Head:VP(VP(Head:Nh:\u6211\u5011)|VP(Head:D:\u90fd))|Head:VP(Head:VK:\u559c\u6b61|NP(Head:Na:\u8774\u8776)))# Prob (log 2 )" }, "FIGREF8": { "uris": null, "num": null, "type_str": "figure", "text": "An Example of Rule calculationand and WA probability." }, "FIGREF9": { "uris": null, "num": null, "type_str": "figure", "text": "Figure 4 shows the WA values of the first sentence at each level. Similarly the WA data are produced for all other input sentences. 
Then, we derive the evaluation value Value(y(s))." }, "FIGREF11": { "uris": null, "num": null, "type_str": "figure", "text": "Matching rule with WA value in each level (sentence length \u2265 6)." }, "FIGREF12": { "uris": null, "num": null, "type_str": "figure", "text": "Procedure of self-learning." }, "FIGREF13": { "uris": null, "num": null, "type_str": "figure", "text": "\u4ed6({Nh,Nes}) \u53eb({VG,VF}) \uf9e1\u56db(Nb) \u64bf({VC,VB}) \u76ae\u7403(Na); confidence value=0.8: \u4ed6(Nh) \u53eb({VG,VF}) \uf9e1\u56db(Nb) \u64bf(VC) \u76ae\u7403(Na); confidence value<0.5: \u4ed6(Nh) \u53eb(VF) \uf9e1\u56db(Nb) \u64bf(VC) \u76ae\u7403(Na)" }, "TABREF1": { "html": null, "text": "", "content": "
Rule Type   | Rule-1 | Rule-2 | Rule-3
Rule number | 9,899  | 26,797 | 13,652
", "num": null, "type_str": "table" }, "TABREF2": { "html": null, "text": "", "content": "
Rule type
", "num": null, "type_str": "table" }, "TABREF3": { "html": null, "text": "", "content": "
Testing Data | n=1   | n=2   | n=5   | n=10  | n=25  | n=50
Sinica       | 91.88 | 94.39 | 95.91 | 96.17 | 96.25 | 96.25
", "num": null, "type_str": "table" }, "TABREF10": { "html": null, "text": "", "content": "
Table 4. The bracketed f-scores of 1-best and oracle performance of 50-best (sentence length \u2265 6).
Top n-best | Sinica | Sinorama | Textbook   (testing data)
1-best     | 83.09  | 77.54    | 83.19
50-best    | 90.11  | 87.44    | 89.94
", "num": null, "type_str": "table" }, "TABREF11": { "html": null, "text": "", "content": "
Table 5. The bracketed f-scores of 50-best parses (sentence length \u2265 6).
Models        | Sinica | Sinorama | Textbook   (testing data)
R, L1, L4, L6 | 86.59  | 82.81    | 85.97
1-best        | 83.09  | 77.54    | 83.19
50-best       | 90.11  | 87.44    | 89.94
", "num": null, "type_str": "table" }, "TABREF14": { "html": null, "text": "", "content": "
Models        | Sinica | Sinorama | Textbook   (testing data)
R, L1, L4, L6 | 79.34  | 74.78    | 82.59
1-best        | 73.41  | 68.34    | 77.83
50-best       | 86.45  | 83.99    | 88.83
", "num": null, "type_str": "table" } } } }