{ "paper_id": "I05-1017", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:26:36.128164Z" }, "title": "PP-Attachment Disambiguation Boosted by a Gigantic Volume of Unambiguous Examples", "authors": [ { "first": "Daisuke", "middle": [], "last": "Kawahara", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Tokyo", "location": { "addrLine": "7-3-1 Hongo Bunkyo-ku", "postCode": "113-8656", "settlement": "Tokyo", "country": "Japan" } }, "email": "kawahara@kc.t.u-tokyo.ac.jp" }, { "first": "Sadao", "middle": [], "last": "Kurohashi", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Tokyo", "location": { "addrLine": "7-3-1 Hongo Bunkyo-ku", "postCode": "113-8656", "settlement": "Tokyo", "country": "Japan" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a PP-attachment disambiguation method based on a gigantic volume of unambiguous examples extracted from raw corpus. The unambiguous examples are utilized to acquire precise lexical preferences for PP-attachment disambiguation. Attachment decisions are made by a machine learning method that optimizes the use of the lexical preferences. Our experiments indicate that the precise lexical preferences work effectively.", "pdf_parse": { "paper_id": "I05-1017", "_pdf_hash": "", "abstract": [ { "text": "We present a PP-attachment disambiguation method based on a gigantic volume of unambiguous examples extracted from raw corpus. The unambiguous examples are utilized to acquire precise lexical preferences for PP-attachment disambiguation. Attachment decisions are made by a machine learning method that optimizes the use of the lexical preferences. 
Our experiments indicate that the precise lexical preferences work effectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "For natural language processing (NLP), resolving various ambiguities is a fundamental and important issue. Prepositional phrase (PP) attachment ambiguity is one such structural ambiguity. Consider, for example, the following sentences [1] :", "cite_spans": [ { "start": 239, "end": 242, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) a. Mary ate the salad with a fork. b. Mary ate the salad with croutons.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The prepositional phrase \"with a fork\" in (1a) modifies the verb \"ate\", because \"with a fork\" describes how the salad is eaten. The prepositional phrase \"with croutons\" in (1b) modifies the noun \"the salad\", because \"with croutons\" describes the salad. Disambiguating such PP-attachment ambiguity requires some kind of world knowledge. However, it is currently difficult to provide computers with such world knowledge, and this makes PP-attachment disambiguation difficult. Recent state-of-the-art parsers perform with practical accuracy, but still seem to suffer from PP-attachment ambiguity [2, 3] .", "cite_spans": [ { "start": 605, "end": 608, "text": "[2,", "ref_id": "BIBREF1" }, { "start": 609, "end": 611, "text": "3]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For NLP tasks including PP-attachment disambiguation, corpus-based approaches have been the dominant paradigm in recent years. They can be divided into two classes: supervised and unsupervised. 
Supervised methods automatically learn rules from tagged data, and achieve good performance on many NLP tasks, especially when lexical information, such as words, is given. Such methods, however, cannot avoid the sparse data problem, because tagged data are not sufficient to discriminate among a large variety of lexical information. To deal with this problem, many smoothing techniques have been proposed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The other class of corpus-based approaches is unsupervised learning. Unsupervised methods take advantage of the large amounts of data that can be extracted from large raw corpora, and thus can alleviate the sparse data problem. Their drawback is lower performance than supervised methods, caused by the use of unreliable information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For PP-attachment disambiguation, both supervised and unsupervised methods have been proposed, and supervised methods have achieved better performance (e.g., 86.5% accuracy by [1] ). Previous unsupervised methods tried to extract reliable information from large raw corpora, but the extraction heuristics seem to be inaccurate [4, 5] . For example, Ratnaparkhi extracted unambiguous word triples of (verb, preposition, noun) or (noun, preposition, noun), and reported that their accuracy was 69% [4] . 
This means that the extracted triples are not truly unambiguous, and this inaccurate treatment may have led to the low PP-attachment performance (81.9%).", "cite_spans": [ { "start": 176, "end": 179, "text": "[1]", "ref_id": "BIBREF0" }, { "start": 327, "end": 330, "text": "[4,", "ref_id": "BIBREF3" }, { "start": 331, "end": 333, "text": "5]", "ref_id": "BIBREF4" }, { "start": 496, "end": 499, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper proposes a PP-attachment disambiguation method based on an enormous number of truly unambiguous examples. The unambiguous examples are extracted from a raw corpus using heuristics inspired by the following example sentences in [6] :", "cite_spans": [ { "start": 241, "end": 244, "text": "[6]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(2) a. She sent him into the nursery to gather up his toys. b. The road to London is long and winding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In these sentences, the underlined PPs are unambiguously attached to the double-underlined verb or noun. The extracted unambiguous examples are utilized to acquire precise lexical preferences for PP-attachment disambiguation. Attachment decisions are made by a machine learning technique that optimizes the use of the lexical preferences. The point of our work is to use a \"gigantic\" volume of \"truly\" unambiguous examples. The use of only truly unambiguous examples leads to high-quality statistics and good disambiguation performance in spite of learning from a raw corpus. Furthermore, by using a gigantic volume of data, we can alleviate the influence of the sparse data problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b. 
The road to London is long and winding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b. The road to London is long and winding.", "sec_num": null }, { "text": "The remainder of this paper is organized as follows. Section 2 briefly describes the widely used training and test sets for PP-attachment. Section 3 summarizes previous work on PP-attachment. Section 4 describes a method of calculating lexical preference statistics from a gigantic volume of unambiguous examples. Section 5 is devoted to our PP-attachment disambiguation algorithm. Section 6 presents the experiments of our disambiguation method. Section 7 gives the conclusions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "b. The road to London is long and winding.", "sec_num": null }, { "text": "The PP-attachment data with the correct attachment sites are available 1 . These data were extracted from the Penn Treebank [7] by the IBM research group [8] . Hereafter, we call these data the \"IBM data\". Some examples from the IBM data are shown in Table 1 . The data consist of 20,801 training and 3,097 test tuples. In addition, a development set of 4,039 tuples is provided. Various baselines and upper bounds of PP-attachment disambiguation are shown in Table 2 . All the accuracies except the human performances are on the IBM data. The human performances were reported by [8] .", "cite_spans": [ { "start": 115, "end": 118, "text": "[7]", "ref_id": "BIBREF6" }, { "start": 145, "end": 148, "text": "[8]", "ref_id": "BIBREF7" }, { "start": 565, "end": 568, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 236, "end": 243, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 445, "end": 452, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Tagged Data for PP-Attachment", "sec_num": "2" }, { "text": "There have been many supervised approaches to PP-attachment disambiguation. Most of them used the IBM data as their training and test data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "Ratnaparkhi et al. 
proposed a maximum entropy model considering words and semantic classes of quadruples, and achieved 81.6% accuracy [8] . Brill and Resnik presented a transformation-based learning method [9] . They reported 81.8% accuracy, but they did not use the IBM data 2 . Collins and Brooks used a probabilistic model with backing-off to smooth the probabilities of unseen events, and its accuracy was 84.5% [10] . Stetina and Nagao used decision trees combined with a semantic dictionary [11] . They achieved 88.1% accuracy, which approaches the human accuracy of 88.2%. This strong performance is presumably due to the manually constructed semantic dictionary, which can be regarded as a part of world knowledge. Zavrel et al. employed a nearest-neighbor method, and its accuracy was 84.4% [12] . Abney et al. proposed a boosting approach, which yielded 84.6% accuracy [13] . Vanschoenwinkel and Manderick introduced a kernel method into PP-attachment disambiguation, and attained 84.8% accuracy [14] . Zhao and Lin proposed a nearest-neighbor method with contextually similar words learned from a large raw corpus [1] . 
They achieved 86.5% accuracy, which is the best performance among previous methods for PP-attachment disambiguation that use no manually constructed knowledge bases.", "cite_spans": [ { "start": 140, "end": 143, "text": "[8]", "ref_id": "BIBREF7" }, { "start": 212, "end": 215, "text": "[9]", "ref_id": "BIBREF8" }, { "start": 422, "end": 426, "text": "[10]", "ref_id": "BIBREF9" }, { "start": 503, "end": 507, "text": "[11]", "ref_id": "BIBREF10" }, { "start": 814, "end": 818, "text": "[12]", "ref_id": "BIBREF11" }, { "start": 891, "end": 895, "text": "[13]", "ref_id": "BIBREF12" }, { "start": 1019, "end": 1023, "text": "[14]", "ref_id": "BIBREF13" }, { "start": 1135, "end": 1138, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "There have been several unsupervised methods for PP-attachment disambiguation. Hindle and Rooth extracted over 200K (v, n 1 , p) triples with ambiguous attachment sites from 13M words of AP news stories [15] . Their disambiguation method used a lexical association score, and performed at 75.8% accuracy on their own data set. Ratnaparkhi collected 910K unique unambiguous triples (v, p, n 2 ) or (n 1 , p, n 2 ) from 970K Wall Street Journal sentences, and proposed a probabilistic model based on cooccurrence values calculated from the collected data [4] . He reported 81.9% accuracy. As previously mentioned, the accuracy was possibly lowered by the inaccurately (69% accuracy) extracted examples. Pantel and Lin extracted 8,900K ambiguous quadruples and 4,400K unambiguous triples from a 125M-word newspaper corpus [5] . They utilized scores based on cooccurrence values, resulting in 84.3% accuracy. 
The accuracy of the extracted unambiguous triples is unknown, but depends on the accuracy of their parser.", "cite_spans": [ { "start": 203, "end": 207, "text": "[15]", "ref_id": "BIBREF14" }, { "start": 554, "end": 557, "text": "[4]", "ref_id": "BIBREF3" }, { "start": 816, "end": 819, "text": "[5]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "There is also a method combining the supervised and unsupervised approaches. Volk combined supervised and unsupervised methods for PP-attachment disambiguation in German [16] . He extracted triples that are possibly unambiguous from 5.5M words of a science magazine corpus, but these triples were not truly unambiguous. His unsupervised method is based on cooccurrence probabilities learned from the extracted triples. His supervised method adopted the backed-off model of Collins and Brooks, learned from 5,803 quadruples. Its accuracy on a test set of 4,469 quadruples was 73.98%, and was boosted to 80.98% by the unsupervised cooccurrence scores. However, his work was constrained by the availability of only a small tagged corpus, and thus it is unknown whether such an improvement can be achieved if a larger tagged set like the IBM data is available.", "cite_spans": [ { "start": 164, "end": 168, "text": "[16]", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "We acquire lexical preferences that are useful for PP-attachment disambiguation from a raw corpus. As the lexical preferences, cooccurrence statistics between the verb and the prepositional phrase or between the noun and the prepositional phrase are used. These cooccurrence statistics can be obtained from a large raw corpus, but the simple use of such a raw corpus possibly produces unreliable statistics. We extract only truly unambiguous examples from a huge raw corpus to acquire precise preference statistics. 
This section first describes the raw corpus, and then how truly unambiguous examples are extracted from it. Finally, we explain our method of calculating the lexical preferences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acquiring Precise Lexical Preferences from Raw Corpus", "sec_num": "4" }, { "text": "In our approach, a large raw corpus is required. We extracted the raw corpus from 200M Web pages that had been collected by a Web crawler over a month [17] . To obtain the raw corpus, each Web page is processed by the following tools:", "cite_spans": [ { "start": 156, "end": 160, "text": "[17]", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Raw Corpus", "sec_num": "4.1" }, { "text": "Sentences are extracted from each Web page by a simple HTML parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "sentence extracting", "sec_num": "1." }, { "text": "Sentences are tokenized by a simple tokenizer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "tokenizing", "sec_num": "2." }, { "text": "Tokenized sentences are given part-of-speech tags by the Brill tagger [18] .", "cite_spans": [ { "start": 66, "end": 70, "text": "[18]", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "part-of-speech tagging", "sec_num": "3." }, { "text": "Tagged sentences are chunked by the YamCha chunker [19] .", "cite_spans": [ { "start": 47, "end": 51, "text": "[19]", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "chunking", "sec_num": "4." }, { "text": "By the above procedure, we acquired 1,300M chunked sentences, consisting of 21G words, from the 200M Web pages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "chunking", "sec_num": "4." }, { "text": "Unambiguous examples are extracted from the chunked sentences. 
Our heuristics for extracting truly unambiguous examples were designed in light of the following two types of unambiguous examples in [6] .", "cite_spans": [ { "start": 196, "end": 199, "text": "[6]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Extraction of Unambiguous Examples", "sec_num": "4.2" }, { "text": "(3) a. She sent him into the nursery to gather up his toys. b. The road to London is long and winding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extraction of Unambiguous Examples", "sec_num": "4.2" }, { "text": "The prepositional phrase \"into the nursery\" in (3a) must attach to the verb \"sent\", because attachment to a pronoun like \"him\" is not possible. The prepositional phrase \"to London\" in (3b) must attach to the noun \"road\", because there are no preceding possible heads.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extraction of Unambiguous Examples", "sec_num": "4.2" }, { "text": "We use the following two heuristics to extract unambiguous examples like the above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extraction of Unambiguous Examples", "sec_num": "4.2" }, { "text": "-To extract an unambiguous triple (v, p, n 2 ) like (3a), a verb followed by a pronoun and a prepositional phrase is extracted. 
-To extract an unambiguous triple (n 1 , p, n 2 ) like (3b), a noun phrase followed by a prepositional phrase at the beginning of a sentence is extracted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extraction of Unambiguous Examples", "sec_num": "4.2" }, { "text": "The extracted examples are processed in the following way:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-processing of Extracted Examples", "sec_num": "4.3" }, { "text": "-For verbs (v):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-processing of Extracted Examples", "sec_num": "4.3" }, { "text": "\u2022 Verbs are reduced to their lemma. -For nouns (n 1 , n 2 ):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-processing of Extracted Examples", "sec_num": "4.3" }, { "text": "\u2022 4-digit numbers are replaced with .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-processing of Extracted Examples", "sec_num": "4.3" }, { "text": "\u2022 All other strings of numbers are replaced with .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-processing of Extracted Examples", "sec_num": "4.3" }, { "text": "\u2022 All words at the beginning of a sentence are converted into lower case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-processing of Extracted Examples", "sec_num": "4.3" }, { "text": "\u2022 All words starting with a capital letter followed by one or more lower case letters are replaced with . 
\u2022 All other words are reduced to their singular form.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-processing of Extracted Examples", "sec_num": "4.3" }, { "text": "-For prepositions (p):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-processing of Extracted Examples", "sec_num": "4.3" }, { "text": "\u2022 Prepositions are converted into lower case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-processing of Extracted Examples", "sec_num": "4.3" }, { "text": "As a result, 21M (v, p, n 2 ) triples and 147M (n 1 , p, n 2 ) triples, 168M in total, were acquired.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-processing of Extracted Examples", "sec_num": "4.3" }, { "text": "From the extracted truly unambiguous examples, lexical preferences for PP-attachment are calculated. As the lexical preferences, pointwise mutual information between v and \"p n 2 \" is calculated from cooccurrence counts of v and \"p n 2 \" as follows 3 :", "cite_spans": [ { "start": 248, "end": 249, "text": "3", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Calculation of Lexical Preferences for PP-Attachment", "sec_num": "4.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "I(v, pn_2) = \\log \\frac{f(v, pn_2)/N}{(f(v)/N)(f(pn_2)/N)}", "eq_num": "(1)" } ], "section": "Calculation of Lexical Preferences for PP-Attachment", "sec_num": "4.4" }, { "text": "where N denotes the total number of the extracted examples (168M), f(v) and f(pn 2 ) are the frequencies of v and \"p n 2 \", respectively, and f(v, pn 2 ) is the cooccurrence frequency of v and pn 2 . 
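As an illustrative sketch (with hypothetical toy counts, not the statistics computed from the 168M extracted triples), Eq. (1) can be evaluated directly from frequencies:

```python
from math import log

# Toy illustration of Eq. (1):
#   I(v, pn2) = log( (f(v,pn2)/N) / ((f(v)/N) * (f(pn2)/N)) )
# All counts below are made up for demonstration purposes.
N = 1000                              # total number of extracted examples
f = {'eat': 50, 'with fork': 20}      # marginal frequencies f(v), f(pn2)
f_joint = {('eat', 'with fork'): 10}  # cooccurrence frequency f(v, pn2)

def pmi(v, pn2):
    # Equivalent simplification: log( f(v,pn2) * N / (f(v) * f(pn2)) )
    return log((f_joint[(v, pn2)] / N) / ((f[v] / N) * (f[pn2] / N)))

score = pmi('eat', 'with fork')       # log(10) here, about 2.30
```

A positive score indicates that the verb and the prepositional phrase cooccur more often than chance, i.e. a lexical preference for that attachment.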
Similarly, pointwise mutual information between n 1 and \"p n 2 \" is calculated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculation of Lexical Preferences for PP-Attachment", "sec_num": "4.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "I(n_1, pn_2) = \\log \\frac{f(n_1, pn_2)/N}{(f(n_1)/N)(f(pn_2)/N)}", "eq_num": "(2)" } ], "section": "Calculation of Lexical Preferences for PP-Attachment", "sec_num": "4.4" }, { "text": "The preference scores ignoring n 2 are also calculated:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculation of Lexical Preferences for PP-Attachment", "sec_num": "4.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "I(v, p) = \\log \\frac{f(v, p)/N}{(f(v)/N)(f(p)/N)} \\quad (3) \\qquad I(n_1, p) = \\log \\frac{f(n_1, p)/N}{(f(n_1)/N)(f(p)/N)}", "eq_num": "(4)" } ], "section": "Calculation of Lexical Preferences for PP-Attachment", "sec_num": "4.4" }, { "text": "Our method for resolving PP-attachment ambiguity takes a quadruple (v, n 1 , p, n 2 ) as input, and classifies it as V or N. The class V means that the prepositional phrase \"p n 2 \" modifies the verb v. 
The class N means that the prepositional phrase modifies the noun n 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PP-Attachment Disambiguation Method", "sec_num": "5" }, { "text": "To solve this binary classification task, we employ Support Vector Machines (SVMs), which are well known for their good generalization performance [20] .", "cite_spans": [ { "start": 153, "end": 157, "text": "[20]", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "PP-Attachment Disambiguation Method", "sec_num": "5" }, { "text": "We consider the following features:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PP-Attachment Disambiguation Method", "sec_num": "5" }, { "text": "-LEX: the word of each quadruple element. To reduce sparse data problems, all verbs and nouns are pre-processed using the method stated in Section 4.3. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PP-Attachment Disambiguation Method", "sec_num": "5" }, { "text": "-POS: part-of-speech information of v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PP-Attachment Disambiguation Method", "sec_num": "5" }, { "text": "We conducted experiments on the IBM data. As the SVM implementation, we employed SVM light [21] . To determine the parameters of SVM light , we ran our method on the development set of the IBM data. As a result, the parameter j, which controls the relative weight of training errors on the two classes [22] , was set to 0.65, and a 3rd-degree polynomial kernel was chosen. Table 3 shows the experimental results for PP-attachment disambiguation. For comparison, we conducted several experiments with different feature combinations in addition to our proposed model. The \"LEX+POS\" model was a little worse than \"LEX\", but \"LEX+POS+LP\" was better than \"LEX+LP\" (and also \"POS+LP\" was better than \"LP\"). From these results, we can see that \"LP\" worked effectively, and the combination \"LEX+POS+LP\" was very effective. 
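To make the role of the lexical-preference (LP) scores concrete, the following is a deliberately simplified decision sketch that compares the verb-side and noun-side scores directly. The scores and the attach function are hypothetical; the method itself feeds all four LP scores, together with the LEX and POS features, into the SVM rather than comparing them by hand:

```python
# Simplified stand-in for the SVM classifier described above:
# attach to the verb (class V) if the verb-side LP scores dominate,
# otherwise to the noun (class N). I maps (head, pp) or (head, p)
# pairs to mutual-information scores; missing pairs default to 0.
def attach(v, n1, p, n2, I):
    verb_score = I.get((v, p + ' ' + n2), 0.0) + I.get((v, p), 0.0)
    noun_score = I.get((n1, p + ' ' + n2), 0.0) + I.get((n1, p), 0.0)
    # Ties fall back to class N (noun attachment).
    return 'V' if verb_score > noun_score else 'N'

# Hypothetical scores for the quadruples of example (1).
I = {('ate', 'with fork'): 2.0, ('salad', 'with crouton'): 2.5}
attach('ate', 'salad', 'with', 'fork', I)     # -> 'V'
attach('ate', 'salad', 'with', 'crouton', I)  # -> 'N'
```

The SVM replaces this hard-coded comparison with a learned decision boundary, which is what allows the LEX and POS features to override the LP scores where the statistics are unreliable.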
Table 4 shows the precision and recall of the \"LEX+POS+LP\" model for each class (N and V). Table 5 shows the accuracies achieved by previous methods. Our performance is higher than that of any previous method except [11] . The method of Stetina and Nagao employed a manually constructed sense dictionary, and this contributes to its good performance. Figure 1 shows the learning curves of the \"LEX\" and \"LEX+POS+LP\" models as the number of tagged training examples changes. When using all the training data, \"LEX+POS+LP\" was better than \"LEX\" by approximately 2%. With a small training set, \"LEX+POS+LP\" was better than \"LEX\" by approximately 5%; in this situation, in particular, the lexical preferences worked more effectively. Figure 2 shows the learning curve of the \"LEX+POS+LP\" model as the number of unambiguous examples used changes. The accuracy rises rapidly up to 10M unambiguous examples, then drops once, but after that rises slightly. The best score, 87.28%, was achieved when using 77M unambiguous examples. ", "cite_spans": [ { "start": 90, "end": 94, "text": "[21]", "ref_id": "BIBREF20" }, { "start": 297, "end": 301, "text": "[22]", "ref_id": "BIBREF21" }, { "start": 1005, "end": 1009, "text": "[11]", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 362, "end": 369, "text": "Table 3", "ref_id": null }, { "start": 794, "end": 801, "text": "Table 4", "ref_id": "TABREF2" }, { "start": 881, "end": 888, "text": "Table 5", "ref_id": null }, { "start": 1133, "end": 1141, "text": "Figure 1", "ref_id": null }, { "start": 1508, "end": 1516, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Experiments and Discussions", "sec_num": "6" }, { "text": "This paper has presented a corpus-based method for PP-attachment disambiguation. Our approach utilizes precise lexical preferences learned from a gigantic volume of truly unambiguous examples extracted from a raw corpus. 
Attachment decisions are made using a machine learning method that incorporates these lexical preferences. Our experiments indicated that the precise lexical preferences worked effectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "In the future, we will investigate useful contextual features for PP-attachment, because human accuracy improves by around 5% when annotators see more than just a quadruple.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "Available at ftp://ftp.cis.upenn.edu/pub/adwait/PPattachData/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The accuracy on the IBM data was 81.9% [10].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "As in previous work, simple probability ratios can be used, but a preliminary experiment on the development set shows that their accuracy is approximately 1% worse than that of the mutual information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Prof. Kenjiro Taura for allowing us to use an enormous Web corpus. 
We would also like to thank Tomohide Shibata for his constructive and fruitful discussions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A nearest-neighbor method for resolving pp-attachment ambiguity", "authors": [ { "first": "S", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "D", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 1st International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "428--434", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhao, S., Lin, D.: A nearest-neighbor method for resolving pp-attachment ambiguity. In: Proceedings of the 1st International Joint Conference on Natural Language Processing. (2004) 428-434", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Head-Driven Statistical Models for Natural Language Parsing", "authors": [ { "first": "M", "middle": [], "last": "Collins", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins, M.: Head-Driven Statistical Models for Natural Language Parsing. PhD thesis, University of Pennsylvania (1999)", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A maximum-entropy-inspired parser", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 1st Meeting of the North American Chapter", "volume": "", "issue": "", "pages": "132--139", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charniak, E.: A maximum-entropy-inspired parser. In: Proceedings of the 1st Meeting of the North American Chapter of the Association for Computational Linguistics. 
(2000) 132-139", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Statistical models for unsupervised prepositional phrase attachment", "authors": [ { "first": "A", "middle": [], "last": "Ratnaparkhi", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 17th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1079--1085", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ratnaparkhi, A.: Statistical models for unsupervised prepositional phrase attachment. In: Proceedings of the 17th International Conference on Computational Linguistics. (1998) 1079-1085", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "An unsupervised approach to prepositional phrase attachment using contextually similar words", "authors": [ { "first": "P", "middle": [], "last": "Pantel", "suffix": "" }, { "first": "D", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "101--108", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pantel, P., Lin, D.: An unsupervised approach to prepositional phrase attachment using contextually similar words. In: Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics. (2000) 101-108", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Foundations of Statistical Natural Language Processing", "authors": [ { "first": "C", "middle": [], "last": "Manning", "suffix": "" }, { "first": "H", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manning, C., Sch\u00fctze, H.: Foundations of Statistical Natural Language Processing. 
MIT Press (1999)", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Building a large annotated corpus of English: the Penn Treebank", "authors": [ { "first": "M", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "B", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "M", "middle": [], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1994, "venue": "Computational Linguistics", "volume": "19", "issue": "", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcus, M., Santorini, B., Marcinkiewicz, M.: Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics 19 (1994) 313-330", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A maximum entropy model for prepositional phrase attachment", "authors": [ { "first": "A", "middle": [], "last": "Ratnaparkhi", "suffix": "" }, { "first": "J", "middle": [], "last": "Reynar", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the ARPA Human Language Technology Workshop", "volume": "", "issue": "", "pages": "250--255", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ratnaparkhi, A., Reynar, J., Roukos, S.: A maximum entropy model for prepositional phrase attachment. In: Proceedings of the ARPA Human Language Technology Workshop. (1994) 250-255", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A rule-based approach to prepositional phrase attachment disambiguation", "authors": [ { "first": "E", "middle": [], "last": "Brill", "suffix": "" }, { "first": "P", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 15th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1198--1204", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brill, E., Resnik, P.: A rule-based approach to prepositional phrase attachment disambiguation. 
In: Proceedings of the 15th International Conference on Computational Linguistics. (1994) 1198-1204
In: Proceedings of the Workshop on Computational Natural Language Learning. (1997) 136-144", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Boosting applied to tagging and pp attachment", "authors": [ { "first": "S", "middle": [], "last": "Abney", "suffix": "" }, { "first": "R", "middle": [], "last": "Schapire", "suffix": "" }, { "first": "Y", "middle": [], "last": "Singer", "suffix": "" } ], "year": 1999, "venue": "Proceedings of 1999 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora", "volume": "", "issue": "", "pages": "38--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abney, S., Schapire, R., Singer, Y.: Boosting applied to tagging and pp attach- ment. In: Proceedings of 1999 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora. (1999) 38-45", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A weighted polynomial information gain kernel for resolving pp attachment ambiguities with support vector machines", "authors": [ { "first": "B", "middle": [], "last": "Vanschoenwinkel", "suffix": "" }, { "first": "B", "middle": [], "last": "Manderick", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 18th International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "133--138", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vanschoenwinkel, B., Manderick, B.: A weighted polynomial information gain kernel for resolving pp attachment ambiguities with support vector machines. In: Proceedings of the 18th International Joint Conference on Artificial Intelligence. 
(2003) 133-138", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Structural ambiguity and lexical relations", "authors": [ { "first": "D", "middle": [], "last": "Hindle", "suffix": "" }, { "first": "M", "middle": [], "last": "Rooth", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "", "pages": "103--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hindle, D., Rooth, M.: Structural ambiguity and lexical relations. Computational Linguistics 19 (1993) 103-120", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Combining unsupervised and supervised methods for pp attachment disambiguation", "authors": [ { "first": "M", "middle": [], "last": "Volk", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 19th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1065--1071", "other_ids": {}, "num": null, "urls": [], "raw_text": "Volk, M.: Combining unsupervised and supervised methods for pp attachment disambiguation. In: Proceedings of the 19th International Conference on Compu- tational Linguistics. (2002) 1065-1071", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "World wide web crawler", "authors": [ { "first": "T", "middle": [], "last": "Takahashi", "suffix": "" }, { "first": "H", "middle": [], "last": "Soonsang", "suffix": "" }, { "first": "K", "middle": [], "last": "Taura", "suffix": "" }, { "first": "A", "middle": [], "last": "Yonezawa", "suffix": "" } ], "year": 2002, "venue": "Poster Proceedings of the 11th International World Wide Web Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Takahashi, T., Soonsang, H., Taura, K., Yonezawa, A.: World wide web crawler. In: Poster Proceedings of the 11th International World Wide Web Conference. 
(2002)", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging", "authors": [ { "first": "E", "middle": [], "last": "Brill", "suffix": "" } ], "year": 1995, "venue": "Computational Linguistics", "volume": "21", "issue": "", "pages": "543--565", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brill, E.: Transformation-based error-driven learning and natural language process- ing: A case study in part-of-speech tagging. Computational Linguistics 21 (1995) 543-565", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Chunking with support vector machines", "authors": [ { "first": "T", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "Y", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 2nd Meeting of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "192--199", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kudo, T., Matsumoto, Y.: Chunking with support vector machines. In: Proceed- ings of the 2nd Meeting of the North American Chapter of the Association for Computational Linguistics. (2001) 192-199", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The Nature of Statistical Learning Theory", "authors": [ { "first": "V", "middle": [], "last": "Vapnik", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vapnik, V.: The Nature of Statistical Learning Theory. 
Springer (1995)", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "In: Making Large-Scale Support Vector Machine Learning Practical", "authors": [ { "first": "T", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 1999, "venue": "Advances in Kernel Methods -Support Vector Learning", "volume": "", "issue": "", "pages": "169--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joachims, T.: 11. In: Making Large-Scale Support Vector Machine Learning Prac- tical, in Advances in Kernel Methods -Support Vector Learning. MIT Press (1999) 169-184", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Combining statistical learning with a knowledge-based approach -a case study in intensive care monitoring", "authors": [ { "first": "K", "middle": [], "last": "Morik", "suffix": "" }, { "first": "P", "middle": [], "last": "Brockhausen", "suffix": "" }, { "first": "T", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 16th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "268--277", "other_ids": {}, "num": null, "urls": [], "raw_text": "Morik, K., Brockhausen, P., Joachims, T.: Combining statistical learning with a knowledge-based approach -a case study in intensive care monitoring. In: Proceed- ings of the 16th International Conference on Machine Learning. (1999) 268-277", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Learning Curve of PP-Attachment Disambiguation Learning Curve of PP-Attachment Disambiguation while changing the number of used unambiguous examples", "type_str": "figure", "uris": null, "num": null }, "TABREF0": { "html": null, "text": "Some Examples of the IBM data", "num": null, "content": "
v          n1           p     n2         attach
join       board        as    director   V
is         chairman     of    N.V.       N
using      crocidolite  in    filters    V
bring      attention    to    problem    V
is         asbestos     in    product    N
making     paper        for   filters    N
including  three        with  cancer     N
Table 2. Various Baselines and Upper Bounds of PP-Attachment Disambiguation
method                            accuracy
always N                          59.0%
N if p is "of"; otherwise V       70.4%
most likely for each preposition  72.2%
average human (only quadruple)    88.2%
average human (whole sentence)    93.2%
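The three non-human baselines in Table 2 are simple decision rules over (v, n1, p, n2) quadruples like those in Table 1. They can be sketched as follows (illustrative code; the function names and the toy training set are ours, with the training examples drawn from Table 1 — the reported accuracies come from the full IBM data, not this sketch):

```python
from collections import Counter, defaultdict

def baseline_always_n(quad):
    """Baseline 1: always attach to the noun."""
    return "N"

def baseline_of_rule(quad):
    """Baseline 2: attach to the noun iff the preposition is "of"."""
    v, n1, p, n2 = quad
    return "N" if p == "of" else "V"

def train_most_likely(examples):
    """Baseline 3: pick the most frequent attachment seen for each
    preposition in training data (examples: ((v, n1, p, n2), attach) pairs)."""
    counts = defaultdict(Counter)
    for (v, n1, p, n2), attach in examples:
        counts[p][attach] += 1
    return {p: c.most_common(1)[0][0] for p, c in counts.items()}

# toy training set drawn from Table 1
train = [
    (("join", "board", "as", "director"), "V"),
    (("is", "chairman", "of", "N.V."), "N"),
    (("using", "crocidolite", "in", "filters"), "V"),
    (("making", "paper", "for", "filters"), "N"),
]
model = train_most_likely(train)
print(baseline_of_rule(("is", "chairman", "of", "N.V.")))  # N
```

The "of" rule alone already lifts accuracy from 59.0% to 70.4%, which is why "of" is such a strong lexical cue in this task.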
", "type_str": "table" }, "TABREF2": { "html": null, "text": "Precision and Recall for Each Attachment Site (\"LEX+POS+LP\" model)", "num": null, "content": "
classprecisionrecall
V 1067/1258 (84.82%) 1067/1271 (83.95%)
N 1635/1839 (88.91%) 1635/1826 (89.54%)
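The percentages in Table 4 follow directly from the raw counts: precision divides correct decisions by the number of times the class was predicted, recall by the number of gold instances. A minimal check (the helper name `pr` is ours):

```python
def pr(correct, predicted, gold):
    """Precision = correct/predicted, recall = correct/gold, in percent."""
    return round(100 * correct / predicted, 2), round(100 * correct / gold, 2)

# class V: 1067 correct, 1258 predicted as V, 1271 gold V instances
print(pr(1067, 1258, 1271))  # (84.82, 83.95)
# class N: 1635 correct, 1839 predicted as N, 1826 gold N instances
print(pr(1635, 1839, 1826))  # (88.91, 89.54)
```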
Table 5. PP-Attachment Accuracies of Previous Work
method                               model     accuracy
our method                           SVM       87.25%
supervised
Ratnaparkhi et al., 1994             ME        81.6%
Brill and Resnik, 1994               TBL       81.9%
Collins and Brooks, 1995             back-off  84.5%
Zavrel et al., 1997                  NN        84.4%
Stetina and Nagao, 1997              DT        88.1%
Abney et al., 1999                   boosting  84.6%
Vanschoenwinkel and Manderick, 2003  SVM       84.8%
Zhao and Lin                         NN        86.5%
unsupervised
Ratnaparkhi, 1998                    -         81.9%
Pantel and Lin, 2000                 -         84.3%
ME: Maximum Entropy, TBL: Transformation-Based Learning,
DT: Decision Tree, NN: Nearest Neighbor
configurations (McNemar's test; p < 0.05)
", "type_str": "table" } } } }