{ "paper_id": "W01-0705", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:01:29.885267Z" }, "title": "Automatic Verb Classification Using Multilingual Resources", "authors": [ { "first": "Vivian", "middle": [], "last": "Tsang", "suffix": "", "affiliation": {}, "email": "vyctsang@cs.toronto.edu" }, { "first": "Suzanne", "middle": [], "last": "Stevenson", "suffix": "", "affiliation": {}, "email": "suzanne@cs.toronto.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We propose the use of multilingual corpora in the automatic classification of verbs. We extend the work of Merlo and Stevenson (2001), in which statistics over simple syntactic features extracted from textual corpora were used to train an automatic classifier for three lexical semantic classes of English verbs. We hypothesize that some lexical semantic features that are difficult to detect superficially in English may manifest themselves as easily extractable surface syntactic features in another language. Our experimental results combining English and Chinese features show that a small bilingual corpus may provide a useful alternative to using a large monolingual corpus for verb classification.", "pdf_parse": { "paper_id": "W01-0705", "_pdf_hash": "", "abstract": [ { "text": "We propose the use of multilingual corpora in the automatic classification of verbs. We extend the work of Merlo and Stevenson (2001), in which statistics over simple syntactic features extracted from textual corpora were used to train an automatic classifier for three lexical semantic classes of English verbs. We hypothesize that some lexical semantic features that are difficult to detect superficially in English may manifest themselves as easily extractable surface syntactic features in another language. 
Our experimental results combining English and Chinese features show that a small bilingual corpus may provide a useful alternative to using a large monolingual corpus for verb classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Recently, a number of researchers have devised corpus-based approaches for automatically learning the lexical semantic class of verbs (e.g., McCarthy and Korhonen, 1998; Lapata and Brew, 1999; Schulte im Walde, 2000; Merlo and Stevenson, 2001). Automatic verb classification yields important potential benefits for the creation of lexical resources. Lexical semantic classes incorporate both syntactic and semantic information about verbs, such as the general sense of the verb (e.g., change-of-state or manner-of-motion) and the allowable mapping of verbal arguments to syntactic positions (e.g., whether an experiencer argument can appear as the subject or the object of the verb) (Levin, 1993). By automatically learning the assignment of verbs to lexical semantic classes, each verb inherits a great deal of information about its possible usage in an NLP system, without that information having to be explicitly hand-coded.", "cite_spans": [ { "start": 141, "end": 169, "text": "McCarthy and Korhonen, 1998;", "ref_id": "BIBREF7" }, { "start": 170, "end": 192, "text": "Lapata and Brew, 1999;", "ref_id": "BIBREF6" }, { "start": 193, "end": 216, "text": "Schulte im Walde, 2000;", "ref_id": "BIBREF13" }, { "start": 217, "end": 242, "text": "Merlo and Stevenson, 2001", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we explore the use of multilingual corpora in the automatic learning of verb classification. 
We extend the work of Merlo and Stevenson (2001), in which statistics over simple syntactic features extracted from syntactically annotated corpora were used to train an automatic classifier for a set of sample lexical semantic classes of English verbs. This work had two potential limitations: first, only a small number (five) of syntactic features that correlate with semantic class were proposed; second, a very large corpus (65M words) was needed to extract sufficiently discriminating statistics.", "cite_spans": [ { "start": 129, "end": 154, "text": "Merlo and Stevenson, 2001", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We address both of these issues in the current study by exploiting the use of a parallel English-Chinese corpus. Our motivating hypothesis is that some lexical semantic features that are difficult to detect superficially in English may manifest themselves as surface syntactic features in another language. If this is indeed the case, then we should be able to augment the initial set of English features with features over the translated verbs in the other language (in our case, Chinese).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our hypothesis that a non-English verb feature set can be useful in English verb classification is inspired by SLA (Second Language Acquisition) research on learning English verbs. As the name suggests, SLA research studies how humans acquire a second language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\"Transfer effects\" (the impact of one's native language when learning a second language; Ellis, 1997) are of particular interest to us. Recent research has shown that properties of a non-English native lexicon can influence human learning of English verb class distinctions (e.g., Helms-Park, 1997; Inagaki, 1997; Juffs, 2000). 
Carrying this idea of \"transfer\" over to the machine learning setting, we hypothesize that features from a second language may provide an additional source of information that complements the English features, making it possible that a smaller corpus (a bitext) can be a useful alternative to using a large monolingual corpus for verb classification.", "cite_spans": [ { "start": 274, "end": 291, "text": "Helms-Park, 1997;", "ref_id": "BIBREF2" }, { "start": 292, "end": 306, "text": "Inagaki, 1997;", "ref_id": "BIBREF4" }, { "start": 307, "end": 317, "text": "Juffs, 2000", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Merlo and Stevenson (2001) tested their approach on the major classes of optionally intransitive verbs in English. All the classes allow the same subcategorizations (transitive and intransitive), entailing that they cannot be discriminated by subcategorization alone. Thus, successful classification demonstrates the induction of semantic information from syntactic features. In our work, we focus on two of these classes: the change-of-state verbs, such as open, and the verbs of creation and transformation, such as perform (classes 45 and 26, respectively, from Levin, 1993). Both classes are optionally intransitive, but differ in the alternation between the transitive and intransitive forms. 
The transitive form of a change-of-state verb is a causative form of the intransitive (the door opened / the cat opened the door), while the transitive/intransitive alternates of a creation/transformation verb arise from simple object optionality (the actors performed the skit / the actors performed).", "cite_spans": [ { "start": 9, "end": 33, "text": "Merlo and Stevenson 2001", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "The Verb Classes and English Features", "sec_num": "2" }, { "text": "Merlo and Stevenson (2001) used 5 numeric features that encoded summary statistics over the usage of each verb across the corpus (65M words of the Wall Street Journal, WSJ). The features captured subcategorization and aspectual frequencies of transitivity, passive voice, and the VBN POS tag, as well as statistics that approximated thematic properties of NP arguments (animacy and causativity) from simple syntactic indicators. We adopt these same features in our work, and augment them with Chinese features as described next.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Verb Classes and English Features", "sec_num": "2" }, { "text": "We selected the following Chinese features for our task, based on the properties of the change-of-state and creation/transformation classes. Each numbered item refers to a collection of related features. We describe how we expect each type of feature to vary across the two classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese Features", "sec_num": "3" }, { "text": "1. Chinese POS Tags for Verbs: We used the CKIP (Chinese Knowledge Information Processing Group) POS-tagger to assign one of 15 verb tags to each verb. Additionally, each of these tags can be mapped into the UPenn Chinese Treebank standard (Fei Xia, email communication), which characterizes each verb as \"active\" or \"stative\". 
We note that change-of-state verbs are more likely to be adjectivized than creation/transformation verbs; furthermore, this adjectival property is not unlike the stative property in Chinese. We thus expect the Chinese translations of English change-of-state verbs to be more likely to be assigned a stative verb tag.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese Features", "sec_num": "3" }, { "text": "2. Passive Particles: The adjectival nature of change-of-state verbs may also be reflected in a higher proportion of passive use, since the adjectival use is a passive use. In Chinese, a passive construction is indicated by a passive particle preceding the main verb. For example, the passive sentence This store is closed can be translated as: Zhe4 ge4 (this) shang1 dian4 (store) bei4 (passive particle) guan1 bi4 (closed). We thus expect to find that translations of change-of-state verbs have a higher frequency of occurrence with a passive particle in Chinese.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chinese Features", "sec_num": "3" }, { "text": "In Chinese, some causative sentences use an external periphrastic particle to indicate that the subject is the causal agent of the event specified by the verb. For example, one possible translation for I cracked an egg can be Wo3 (I) jiang1 (periphrastic particle) dan4 (egg) da3 lan4 (crack). Since change-of-state verbs have a causative alternate, and creation/transformation verbs do not, we expect to see a more frequent use of such particles in the translated equivalents of the change-of-state verbs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Periphrastic Causative Particles:", "sec_num": "3." }, { "text": "The types of features discussed so far involve the POS tag of the translated verb, or additional syntactic particles it occurs with. 
We also hypothesize that the semantic class membership of an English verb may influence its word-level translation into Chinese. That is, the sublexical component (the precise morphemic constitution of the translated Chinese verb) may reflect properties of the class of the English verb. The following features are an attempt to exploit this potential source of information:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Morpheme Information:", "sec_num": "4." }, { "text": "Average number of morphemes in the translated verb. Different categories of morphemes in the translated verb: we count occurrences of all combinations of pairs of the POS tags V, N, and A. Semantic specificity of the translated verb: is it semantically more specific than the English verb, e.g., by including additional morphemes?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Morpheme Information:", "sec_num": "4." }, { "text": "The four general types of features we describe above lead to 17 Chinese features in total, which we use alone or in combination with the original 5 features proposed by Merlo and Stevenson (2001). ", "cite_spans": [ { "start": 170, "end": 195, "text": "Merlo and Stevenson 2001.", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Morpheme Information:", "sec_num": "4." }, { "text": "In our experiments, we use the Hong Kong Laws Parallel Text (HKLaws) from the Linguistic Data Consortium, a sentence-aligned bilingual corpus with 6.5M words of English and 9M characters of Chinese. We tagged the Chinese portion of the corpus using the CKIP tagger, and the English portion using Ratnaparkhi's tagger (Ratnaparkhi, 1996). Note that the English portion of HKLaws is about 10% of the size of the corpus used by Merlo and Stevenson (2001) in their original experiments, so we are restricted to a much smaller source of data. 
Given the relatively small size of our corpus, and its narrow domain, we were only able to find a sample of 16 change-of-state and 16 creation/transformation verbs in English of sufficient frequency; see the appendix for the list of verbs used. 1 The English features for these 32 verbs were automatically extracted using regular expressions over the tagged English portion of the corpus.", "cite_spans": [ { "start": 316, "end": 333, "text": "Ratnaparkhi, 1996", "ref_id": "BIBREF11" }, { "start": 422, "end": 446, "text": "Merlo and Stevenson 2001", "ref_id": "BIBREF10" }, { "start": 774, "end": 775, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Materials and Method", "sec_num": "4" }, { "text": "The Chinese features were calculated as follows. For each English verb, we manually determined the Chinese translation in each aligned sentence to yield a collection of all aligned translations of the verb. This is the \"aligned translation set.\" We also extracted all occurrences of the Chinese verbs in the aligned translation set across the corpus, yielding the \"unaligned translation set\", i.e., the possible Chinese translations of an English target verb even when they did not occur as the translation of that verb.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Materials and Method", "sec_num": "4" }, { "text": "The required counts for the Chinese features were collected for these verbs partly automatically (Chinese verb POS tags, passive particles, periphrastic particles, and morpheme length) and partly by hand (semantic specificity and morpheme POS combinations). The value of a Chinese feature for a given verb is the normalized frequency of occurrence of the feature across all occurrences of that verb in the given translation set. 
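This normalization step can be made concrete with a small sketch. The helper below and the sample occurrences are invented for illustration; they are not the paper's actual extraction code.

```python
# A Chinese feature value for a verb: the count of occurrences exhibiting
# the feature, normalized by the total occurrences of that verb's
# translations in the given translation set. Data are invented.
def feature_value(occurrences, has_feature):
    if not occurrences:
        return 0.0
    return sum(1 for occ in occurrences if has_feature(occ)) / len(occurrences)

# e.g., how often the translated verb occurs with the passive particle bei4
occs = [{'particle': 'bei4'}, {'particle': None},
        {'particle': 'bei4'}, {'particle': None}]
print(feature_value(occs, lambda o: o['particle'] == 'bei4'))  # 0.5
```

The same relative-frequency computation applies to each of the 17 Chinese features, once over the aligned and once over the unaligned translation set.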
The resulting frequencies for the aligned translation set form the aligned dataset, and those for the unaligned translation set form the unaligned dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Materials and Method", "sec_num": "4" }, { "text": "The motivation for collecting unaligned data is to examine an alternative method for combining multilingual data. Note that parallel corpora, especially those that are sentence-aligned, are difficult to construct. Most parallel corpora we found are considerably smaller than some of the more popular monolingual ones. Given that more monolingual corpora are available, we want to explore the possibility of using non-parallel texts from multiple languages (hence, necessarily unaligned data), rather than solely looking at bilingual corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Materials and Method", "sec_num": "4" }, { "text": "In order to compare our results to the monolingual method on a large corpus (as in Merlo and Stevenson, 2001), we also collected the 5 English features for our verbs from the 65M-word WSJ corpus. As a result, we have a total of four datasets: the English HKLaws dataset, the English WSJ dataset, the aligned Chinese HKLaws dataset, and the unaligned Chinese HKLaws dataset. 
This allows us to look at the four datasets individually (the two English and two Chinese sets), and to pair up the English and Chinese datasets in four different ways (each English set paired with each Chinese set).", "cite_spans": [ { "start": 82, "end": 107, "text": "Merlo and Stevenson, 2001", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Materials and Method", "sec_num": "4" }, { "text": "The data for each of our machine learning experiments consists of a vector of the relevant English and/or Chinese features for each verb:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Materials and Method", "sec_num": "4" }, { "text": "Template: verb, Eng. Feats., Chi. Feats., class Example: altered, 0.04, ..., 1, change-of-state Combining all the English and Chinese features yields a total of 22 features. We use the resulting vectors as the training data for a classifier using the same decision tree algorithm as in Merlo and Stevenson, 2001 (C5.0; http://www.rulequest.com). We used both 8-fold cross-validation (repeated 50 times) and leave-one-out training methodologies for our experiments. 2 For our 8-fold cross-validation experiments, we empirically tested the tuning options available in C5.0. Except for the tree pruning percentage, we found the available options offer little to no improvement over the default settings. We set the pruning factor to 30% for the best overall performance over a variety of different combinations of features. According to the manual, the default is 25%. 
A larger pruning factor results in less pruning of the decision tree.", "cite_spans": [ { "start": 288, "end": 313, "text": "Merlo and Stevenson, 2001", "ref_id": "BIBREF10" }, { "start": 461, "end": 462, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Materials and Method", "sec_num": "4" }, { "text": "The cross-validation experiments train on a large number of random subsets of the data, for which we report average accuracy and standard error. The goal of the cross-validation experiments is to evaluate the contribution of different features to learning, and if possible to find the best feature combinations. To do so, we varied the precise set of features used in each experiment. Since we have a total of 17 features, performing an exhaustive search of 2^17 (roughly 131 thousand) experiments is infeasible. Instead, we analysed the performance of individual monolingual features alone, and their performance when combined with the features from the other language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Materials and Method", "sec_num": "4" }, { "text": "The leave-one-out experiments complement the cross-validation methodology: there are a small number of tests, but we have the result of classifying each verb rather than average performance data on random subsets. Our goal for the leave-one-out experiments is to compare the precision and recall across the two classes. A feature is selected for the leave-one-out experiments if it contributed highly to performance in the cross-validation experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Materials and Method", "sec_num": "4" }, { "text": "We report here the key results of our cross-validation and leave-one-out experiments. For additional results and details, see Tsang (2001). Since our task is a two-way classification with equal-sized classes, the chance accuracy is 50%. 
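The 8-fold cross-validation setup described above can be sketched as follows. C5.0 is not reproduced here; a trivial one-feature threshold rule stands in for the decision tree learner, and the feature values and class labels are invented for illustration.

```python
import random

# Sketch: shuffle the 32 verbs, split them into 8 folds of 4, train on 7/8
# and test on the held-out 1/8 each time. All data below are invented.
random.seed(0)
data = ([(random.gauss(0.3, 0.1), 'change-of-state') for _ in range(16)]
        + [(random.gauss(0.6, 0.1), 'creation/transformation') for _ in range(16)])
random.shuffle(data)

folds = [data[i::8] for i in range(8)]  # 8 folds of 4 verbs each

def fit_threshold(train):
    # stand-in classifier: midpoint between the two class means on one feature
    a = [x for x, y in train if y == 'change-of-state']
    b = [x for x, y in train if y != 'change-of-state']
    return (sum(a) / len(a) + sum(b) / len(b)) / 2

correct = 0
for i in range(8):
    held_out = folds[i]
    train = [v for j in range(8) if j != i for v in folds[j]]
    t = fit_threshold(train)
    for x, y in held_out:
        pred = 'change-of-state' if x < t else 'creation/transformation'
        correct += pred == y
acc = correct / len(data)
print(acc)  # accuracy over all 32 held-out verbs
```

In the experiments reported here, the whole procedure is additionally repeated 50 times with different random splits, and the mean accuracy and standard error over the repetitions are reported.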
Although the theoretical maximum accuracy is 100%, it is worth noting that, for their three-way verb classification task, Merlo and Stevenson (2001) experimentally determined a best performance of 87% among a group of human experts, indicating that a more realistic upper bound for the machine-learning task falls well below 100%.", "cite_spans": [ { "start": 125, "end": 136, "text": "Tsang, 2001", "ref_id": "BIBREF15" }, { "start": 354, "end": 379, "text": "Merlo and Stevenson, 2001", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "5" }, { "text": "Our cross-validation experiments fall into three general sets. In each of these types of experiments, we use various combinations of the datasets (English HKLaws, English WSJ, Chinese aligned and unaligned), as explained in detail below. First, we analysed the contribution of the English features to learning by testing all English features together, and all English features individually. These tests form our baseline results using monolingual English data. Second, we similarly analysed the contribution of the Chinese features to learning by testing all Chinese features together and all Chinese features individually. Finally, since our overall goal is to observe possible information gain by augmenting English data with non-English data, we present results in which the English features are augmented with the Chinese features. Table 1 shows the results of our experiments evaluating the English features. Using the HKLaws dataset, English features alone achieved a best performance of no better than chance (49.5% accuracy, SE 0.5). Using the WSJ dataset, all the English features together achieved an accuracy of 66.3% (SE 0.6), although the best performance was achieved by a single English feature alone (animacy), with an accuracy of 72.5% (SE 0.4). We note then that the English HKLaws dataset alone is not sufficiently informative for the classification task. 
The best accuracy achieved with the WSJ data, 72.5%, will serve as our monolingual baseline, i.e., the performance we would like to beat with our multilingual data.", "cite_spans": [], "ref_spans": [ { "start": 772, "end": 779, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "8-Fold Cross-Validation", "sec_num": "5.1" }, { "text": "Next, we turn to our evaluation of the Chinese features alone; the results are reported in Table 2. We see that, in contrast to the English HKLaws dataset, the Chinese features alone performed very well. For the aligned and unaligned Chinese HKLaws datasets, using all Chinese features achieved an accuracy of 75.4% and 74.1%, respectively, as shown in line 1 of the table; the two results are not significantly different at the p < 0.05 level. Using the verb POS tags alone in the aligned set (e.g., the UPenn VA stative tag, in line 2 of the table) achieves comparable performance of 75.1% (SE 0.4), again not statistically different from the first two results. The best single feature in the unaligned dataset is also one of the verb tags, achieving only a slightly lower accuracy of 71.5% (SE 0.5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "8-Fold Cross-Validation", "sec_num": "5.1" }, { "text": "Thus, we have the surprising result that Chinese features alone, from a fairly small dataset, are far superior to the English features from the same bilingual corpus (75.4% versus 49.5% best accuracy, respectively). In fact, the Chinese features alone outperform the monolingual baseline of 72.5%, which uses English features from a much larger corpus. 
The difference between the best English-only and best Chinese-only accuracies is small, but statistically significant at the p < 0.05 level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "8-Fold Cross-Validation", "sec_num": "5.1" }, { "text": "Finally, we want to look at the performance of all English features from either corpus augmented with selected Chinese features (aligned or unaligned) from the HKLaws corpus. The results are shown in Table 3. In general, combining English with Chinese features performed very well. Using the English HKLaws data, the best feature combination (using the Chinese CKIP POS tags) achieved a performance of 77.9% accuracy (SE 0.8), for a reduction of 56% of the baseline error rate. See line 1 of Table 3; the results for aligned and unaligned data are not significantly different. Note that, although numerically larger, these results do not differ significantly from the Chinese-only results. We conclude that for the English HKLaws dataset, the Chinese features greatly help the English features, and the English features do not hurt performance of the Chinese features.", "cite_spans": [], "ref_spans": [ { "start": 201, "end": 208, "text": "Table 3", "ref_id": null }, { "start": 484, "end": 491, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "8-Fold Cross-Validation", "sec_num": "5.1" }, { "text": "We also augmented the English WSJ dataset with the Chinese HKLaws dataset; the best accuracy is 80.6% (SE 0.6), for an error rate reduction of 61% (see line 2 of Table 3). [Table 4: F-measure (F), Accuracy (Acc.), and Number of Errors (E) in the Leave-one-out Experiments. 1 = CKIP Tags; 2 = Passive Particles; 3 = Periphrastic Particles.] This best performance is achieved using the UPenn VA tag in the aligned corpus, shown above to be highly useful on its own. 
Here, the performance of the combined dataset (using both English and Chinese features) is significantly better than both the English monolingual baseline of 72.5% and the best Chinese-only accuracy of 75.4% (p < 0.05). We conclude that combining multilingual data has a significant performance benefit over monolingual data from either language. In particular, in augmenting English-only data with Chinese data, we achieve higher accuracies than using either the English HKLaws subcorpus or the much larger WSJ corpus alone. On the other hand, we found that Chinese features alone achieve very good accuracies, close to the performance of the combined datasets, indicating that the Chinese features are highly informative in and of themselves.", "cite_spans": [], "ref_spans": [ { "start": 125, "end": 132, "text": "Table 4", "ref_id": null }, { "start": 321, "end": 328, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "8-Fold Cross-Validation", "sec_num": "5.1" }, { "text": "Finally, we note that, although the English features from the smaller bilingual corpus were not useful in classification on their own, the combination of English and Chinese features from that corpus performed comparably to the combination of the English WSJ features with the Chinese features. Thus, a smaller bilingual corpus may be effectively used either alone or in combination with a larger monolingual corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "8-Fold Cross-Validation", "sec_num": "5.1" }, { "text": "For the leave-one-out experiments, we only report results using the English WSJ data in conjunction with the Chinese HKLaws data, since that yielded the best performance. We focus here on augmenting the English dataset with Chinese features that seem particularly promising. 
Recall that since the leave-one-out method yields the result of classifying each individual verb, we can further analyse the performance within and across the two classes with this multilingual data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Leave-One-Out Methodology", "sec_num": "5.2" }, { "text": "For these tests, we selected the three Chinese features CKIP Tags, Passive Particles, and Periphrastic Particles, because they consistently had an above-chance performance, and/or improved performance when combined with other features, in the cross-validation experiments. The results are shown in Table 4. The italicized sections highlight the feature sets with the best overall accuracies. On the left panel, showing the results with aligned Chinese data, the addition to the English features of any feature combination that includes CKIP Tags yields the same best overall accuracy. On the right panel, showing the unaligned data, the addition of CKIP Tags and Passive Particles has the best overall performance. We see again that with the right feature combination, using multilingual data is superior to using English-only data.", "cite_spans": [], "ref_spans": [ { "start": 298, "end": 305, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Leave-One-Out Methodology", "sec_num": "5.2" }, { "text": "Since we know the number of errors per class, we were able to calculate the precision and recall of each of the two classes as well. Due to space limitations, we only report the F-measure in Table 4. For each class, we calculated a balanced F score as 2PR/(P+R), where P and R are the precision and recall. The two classes yield similar F scores in almost all cases, and the trend does not differ from that of the overall accuracy. Observe that in the italicized sections of the table (the best overall performance), the F scores are larger than those in the monolingual section (the first two lines of the table). 
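The leave-one-out evaluation with per-class F scores can be sketched as follows. The one-feature nearest-class-mean rule and the toy data are invented stand-ins for C5.0 and the real feature vectors; only the procedure (hold out each verb once, tally per-class errors, compute F = 2PR/(P+R)) mirrors the setup described above.

```python
# Sketch: each verb is held out once, a classifier trained on the rest
# predicts its class, and per-class confusion counts yield precision,
# recall, and the balanced F score. All data below are invented.
def nearest_mean(train, x):
    means = {}
    for label in set(lab for _, lab in train):
        vals = [v for v, lab in train if lab == label]
        means[label] = sum(vals) / len(vals)
    return min(means, key=lambda lab: abs(x - means[lab]))

def balanced_f(p, r):
    return 2 * p * r / (p + r) if p + r else 0.0

data = ([(0.1 * i, 'change-of-state') for i in range(4)]
        + [(0.5 + 0.1 * i, 'creation/transformation') for i in range(4)])

tp = {'change-of-state': 0, 'creation/transformation': 0}
fp = dict(tp)
fn = dict(tp)
for i, (x, gold) in enumerate(data):
    pred = nearest_mean(data[:i] + data[i + 1:], x)  # train on all but one
    if pred == gold:
        tp[gold] += 1
    else:
        fp[pred] += 1
        fn[gold] += 1

for label in tp:
    p = tp[label] / (tp[label] + fp[label]) if tp[label] + fp[label] else 0.0
    r = tp[label] / (tp[label] + fn[label]) if tp[label] + fn[label] else 0.0
    print(label, balanced_f(p, r))
```

On this cleanly separable toy data every held-out verb is classified correctly, so both classes receive an F score of 1.0; real feature vectors are noisier.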
We conclude that adding Chinese features to English features has a performance benefit over the monolingual features alone for both verb classes, as well as overall.", "cite_spans": [], "ref_spans": [ { "start": 192, "end": 199, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Leave-One-Out Methodology", "sec_num": "5.2" }, { "text": "Our work is the first use of a bilingual corpus-based technique for the automatic learning of verb classification, though we are not the first to utilize multilingual resources for lexical acquisition tasks generally. For example, Siegel and McKeown (2000) suggested the use of parallel corpora in learning the aspectual classification (i.e., state or event) of English verbs. Ide (2000) and Resnik and Yarowsky (2000) made use of parallel corpora for word sense disambiguation. That is, a parallel English/non-English corpus was used as a source for lexicalizing some fine-grained English senses.", "cite_spans": [ { "start": 227, "end": 251, "text": "Siegel and McKeown, 2000", "ref_id": "BIBREF14" }, { "start": 368, "end": 381, "text": "Ide, 2000 and", "ref_id": "BIBREF3" }, { "start": 382, "end": 392, "text": "Resnik and", "ref_id": "BIBREF12" }, { "start": 393, "end": 407, "text": "Yarowsky, 2000", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Other work using multilingual resources that is highly related to ours comprises the studies by Fung (1998) and by Melamed et al. (1997; 1998), in which a bilingual corpus was used to extract bilingual lexical entries. An important assumption is that the bilingual corpus is sentence- or segment-alignable, which allows for the calculation of a co-occurrence score between any two possible translations. One common theme in these papers is that, given any arbitrary tokens and some text coordinate system, the closer the two tokens' coordinates are, the more likely they are to be translational equivalents. 
Although we did not use an automatic method to find translations of verbs, our aligned data collection technique is similar in spirit. We also make one further assumption that is absent in these papers: in one subcorpus of a bitext, the distribution of the different senses and usages of a word should be reflected in (correlated with) the distribution of its translations in the other subcorpus. We have suggested that some Chinese features are related to some English features; therefore, these Chinese features should also make a similar n-way distinction between the English verb classes.", "cite_spans": [ { "start": 104, "end": 124, "text": "Melamed et al. 1997;", "ref_id": "BIBREF9" }, { "start": 125, "end": 129, "text": "1998", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "We conclude that the use of multilingual corpora, either alone or in combination with monolingual data, can be an effective aid in verb classification. The Chinese features that worked best were the active/stative POS tags, and the passive and causative particles: easily extractable features indicating properties that are difficult to detect in English using only simple syntactic counts. This supports our hypothesis that a second language that provides surface-level features complementing the available English features can extend the possible feature set for verb classification, allowing the use of smaller parallel corpora in place of, or in addition to, larger monolingual data sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "We have presented some preliminary results demonstrating the benefit of using multilingual data. However, we conducted our experiments only on a small test set of 32 verbs in one language pair. To test the generality of our hypothesis, we plan to duplicate our experiments using a larger test set, and to expand our investigation to other language pairs. 
In fact, given our success with even unaligned data, we conjecture that our approach may be greatly enhanced by using multiple monolingual corpora from different languages that differentially express semantic features relevant to verb classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }
Creation and transformation verbs: build, clean, compose, direct, hammer, knit, organise, pack, paint, perform, play, produce, recite, stitch, type, wash.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Second Language Acquisition", "authors": [], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rod Ellis. 1997. Second Language Acquisition. Oxford University Press, Oxford.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A statistical view on bilingual lexicon extraction: from parallel corpora to non-parallel corpora", "authors": [ { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" } ], "year": 1998, "venue": "In Lecture Notes in Arti cial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pascale Fung. 1998. A statistical view on bilingual lexi- con extraction: from parallel corpora to non-parallel corpora. In Lecture Notes in Arti cial Intelligence, pages 1 17. Springer Publisher.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Building an L2 Lexicon: The Acquisition of Verb Classes Relevant to Causativization in English by Speakers of Hindi-Urdu and Vietnamese", "authors": [ { "first": "Rena", "middle": [], "last": "Helms-Park", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rena Helms-Park. 1997. Building an L2 Lexicon: The Acquisition of Verb Classes Relevant to Causativiza- tion in English by Speakers of Hindi-Urdu and Vietnamese. Ph.D. 
thesis, University of Toronto, Toronto, Canada.", "links": null }
In Proceedings of Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora", "authors": [ { "first": "Maria", "middle": [], "last": "Lapata", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brew", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "266--274", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maria Lapata and Chris Brew. 1999. Using subcategorization to resolve verb class ambiguity. In Proceedings of Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 266-274, College Park, MD. Beth Levin. 1993. English Verb Classes and Alternations: A Preliminary Investigation. University of Chicago, Chicago.", "links": null }
In Proceedings of the 36th Annual Meeting of the ACL and the 17th International Conference on Computational Linguistics (COLING-ACL 1998), pages 1493-1495, Montreal, Canada.", "links": null }
Computational Linguistics. To appear.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A maximum entropy partof-speech tagger", "authors": [ { "first": "Adwait", "middle": [], "last": "Ratnaparkhi", "suffix": "" } ], "year": 1996, "venue": "Proceedings of The Empirical Methods in Natural Language Processing Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adwait Ratnaparkhi. 1996. A maximum entropy part- of-speech tagger. In Proceedings of The Empiri- cal Methods in Natural Language Processing Confer- ence, Philadelphia, PA.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Distinguishing systems and distinguishing senses: New evaluation methods for word sense disambiguation", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 2000, "venue": "Natural Language Engineering", "volume": "52", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Resnik and David Yarowsky. 2000. Distinguish- ing systems and distinguishing senses: New evalua- tion methods for word sense disambiguation. Natu- ral Language Engineering, 52:113 133.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Clustering verbs semantically according to their alternation behaviour", "authors": [ { "first": "Sabine", "middle": [], "last": "Schulte Im Walde", "suffix": "" } ], "year": 2000, "venue": "Proceedings of COLING 2000", "volume": "", "issue": "", "pages": "747--753", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sabine Schulte im Walde. 2000. Clustering verbs se- mantically according to their alternation behaviour. 
In Proceedings of COLING 2000, pages 747-753, Saarbrücken, Germany.", "links": null }