{ "paper_id": "D07-1039", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:19:48.200803Z" }, "title": "Detecting Compositionality of Verb-Object Combinations using Selectional Preferences", "authors": [ { "first": "Diana", "middle": [], "last": "Mccarthy", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Sussex Falmer", "location": { "postCode": "BN1 9QH", "settlement": "East Sussex", "country": "UK" } }, "email": "dianam@sussex.ac.uk" }, { "first": "Sriram", "middle": [], "last": "Venkatapathy", "suffix": "", "affiliation": {}, "email": "sriram@research.iiit.ac.in" }, { "first": "Aravind", "middle": [ "K" ], "last": "Joshi", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania", "location": { "settlement": "Philadelphia", "region": "PA", "country": "USA" } }, "email": "joshi@linc.cis.upenn.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper we explore the use of selectional preferences for detecting noncompositional verb-object combinations. To characterise the arguments in a given grammatical relationship we experiment with three models of selectional preference. Two use WordNet and one uses the entries from a distributional thesaurus as classes for representation. In previous work on selectional preference acquisition, the classes used for representation are selected according to the coverage of argument tokens rather than being selected according to the coverage of argument types. In our distributional thesaurus models and one of the methods using WordNet we select classes for representing the preferences by virtue of the number of argument types that they cover, and then only tokens under these classes which are representative of the argument head data are used to estimate the probability distribution for the selectional preference model. 
We demonstrate a highly significant correlation between measures which use these 'typebased' selectional preferences and compositionality judgements from a data set used in previous research. The type-based models perform better than the models which use tokens for selecting the classes. Furthermore, the models which use the automatically acquired thesaurus entries produced the best results. The correlation for the thesaurus models is stronger than any of the individual features used in previous research on the same dataset.", "pdf_parse": { "paper_id": "D07-1039", "_pdf_hash": "", "abstract": [ { "text": "In this paper we explore the use of selectional preferences for detecting noncompositional verb-object combinations. To characterise the arguments in a given grammatical relationship we experiment with three models of selectional preference. Two use WordNet and one uses the entries from a distributional thesaurus as classes for representation. In previous work on selectional preference acquisition, the classes used for representation are selected according to the coverage of argument tokens rather than being selected according to the coverage of argument types. In our distributional thesaurus models and one of the methods using WordNet we select classes for representing the preferences by virtue of the number of argument types that they cover, and then only tokens under these classes which are representative of the argument head data are used to estimate the probability distribution for the selectional preference model. We demonstrate a highly significant correlation between measures which use these 'typebased' selectional preferences and compositionality judgements from a data set used in previous research. The type-based models perform better than the models which use tokens for selecting the classes. Furthermore, the models which use the automatically acquired thesaurus entries produced the best results. 
The correlation for the thesaurus models is stronger than any of the individual features used in previous research on the same dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Characterising the semantic behaviour of phrases in terms of compositionality has particularly attracted attention in recent years (Lin, 1999; Schone and Jurafsky, 2001; Bannard, 2002; McCarthy et al., 2003; Bannard, 2005; Venkatapathy and Joshi, 2005) . Typically the phrases are putative multiwords and noncompositionality is viewed as an important feature of many such \"words with spaces\" (Sag et al., 2002) . For applications such as paraphrasing, information extraction and translation, it is essential to take the words of non-compositional phrases together as a unit because the meaning of a phrase cannot be obtained straightforwardly from the constituent words. In this work we investigate methods of determining semantic compositionality of verb-object 1 combinations on a continuum following previous research in this direction (McCarthy et al., 2003; Venkatapathy and Joshi, 2005) .", "cite_spans": [ { "start": 131, "end": 142, "text": "(Lin, 1999;", "ref_id": "BIBREF19" }, { "start": 143, "end": 169, "text": "Schone and Jurafsky, 2001;", "ref_id": "BIBREF26" }, { "start": 170, "end": 184, "text": "Bannard, 2002;", "ref_id": "BIBREF3" }, { "start": 185, "end": 207, "text": "McCarthy et al., 2003;", "ref_id": "BIBREF20" }, { "start": 208, "end": 222, "text": "Bannard, 2005;", "ref_id": "BIBREF4" }, { "start": 223, "end": 252, "text": "Venkatapathy and Joshi, 2005)", "ref_id": "BIBREF30" }, { "start": 392, "end": 410, "text": "(Sag et al., 2002)", "ref_id": "BIBREF25" }, { "start": 839, "end": 862, "text": "(McCarthy et al., 2003;", "ref_id": "BIBREF20" }, { "start": 863, "end": 892, "text": "Venkatapathy and Joshi, 2005)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction",
"sec_num": "1" }, { "text": "Much previous research has used a combination of statistics and distributional approaches whereby distributional similarity is used to compare the constituents of the multiword with the multiword itself. In this paper, we will investigate the use of selectional preferences of verbs. We will use the preferences to find atypical verb-object combinations as we anticipate that such combinations are more likely to be non-compositional.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Selectional preferences of predicates have been modelled using the man-made thesaurus Word-Net (Fellbaum, 1998) , see for example (Resnik, 1993; Li and Abe, 1998; Abney and Light, 1999; Clark and Weir, 2002) . There are also distributional approaches which use co-occurrence data to cluster distributionally similar words together. The cluster output can then be used as classes for selectional preferences (Pereira et al., 1993) , or one can directly use frequency information from distributionally similar words for smoothing (Grishman and Sterling, 1994) .", "cite_spans": [ { "start": 86, "end": 111, "text": "Word-Net (Fellbaum, 1998)", "ref_id": null }, { "start": 130, "end": 144, "text": "(Resnik, 1993;", "ref_id": "BIBREF23" }, { "start": 145, "end": 162, "text": "Li and Abe, 1998;", "ref_id": "BIBREF17" }, { "start": 163, "end": 185, "text": "Abney and Light, 1999;", "ref_id": "BIBREF0" }, { "start": 186, "end": 207, "text": "Clark and Weir, 2002)", "ref_id": "BIBREF10" }, { "start": 407, "end": 429, "text": "(Pereira et al., 1993)", "ref_id": "BIBREF22" }, { "start": 528, "end": 557, "text": "(Grishman and Sterling, 1994)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We used three different types of probabilistic models, which vary in the classes selected for representation over which the probability distribution of the argument 
heads 2 is estimated. Two use WordNet and the other uses the entries in a thesaurus of distributionally similar words acquired automatically following (Lin, 1998) . The first method is due to Li and Abe (1998) . The classes over which the probability distribution is calculated are selected according to the minimum description length principle (MDL) which uses the argument head tokens for finding the best classes for representation. This method has previously been tried for modelling compositionality of verb-particle constructions (Bannard, 2002) .", "cite_spans": [ { "start": 316, "end": 327, "text": "(Lin, 1998)", "ref_id": "BIBREF18" }, { "start": 357, "end": 374, "text": "Li and Abe (1998)", "ref_id": "BIBREF17" }, { "start": 701, "end": 716, "text": "(Bannard, 2002)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The other two methods (we refer to them as 'typebased') also calculate a probability distribution using argument head tokens but they select the classes over which the distribution is calculated using the number of argument head types (of a verb in a corpus) in a given class, rather than the number of argument head tokens in contrast to previous WordNet models (Resnik, 1993; Li and Abe, 1998; Clark and Weir, 2002) . For example, if the object slot of the verb park contains the argument heads { car, car, car, car, van, jeep } then the type-based models use the word type \"car\" only once when determining the classes over which the probability distribution is to be estimated. Classes are selected which maximise the number of types that they cover, rather than the number of tokens. This is done to avoid the selectional preferences being heavily influenced by noise from highly frequent arguments which may be polysemous and some or all of their meanings may not be semantically related to the 'prototypical' arguments of the verb. 
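As a minimal sketch of the token/type distinction, using an invented two-class inventory rather than the WordNet hyponym hierarchy (all class names and counts here are illustrative):

```python
from collections import Counter

# Hypothetical mini class inventory; the paper's models use WordNet instead.
CLASSES = {
    "motor_vehicle": {"car", "van", "jeep"},
    "compartment": {"car"},  # 'car' also has unrelated senses in WordNet
}

def token_coverage(cls, arg_heads):
    """Number of argument head tokens falling under a class."""
    counts = Counter(arg_heads)
    return sum(counts[noun] for noun in CLASSES[cls])

def type_coverage(cls, arg_heads):
    """Number of distinct argument head types falling under a class."""
    return len(CLASSES[cls] & set(arg_heads))

# The paper's example: the object slot of 'park'.
objects_of_park = ["car", "car", "car", "car", "van", "jeep"]
```

By tokens, the polysemous car lets even an unrelated class like compartment score 4 of 6; by types it covers only 1 of 3, so type-based selection clearly prefers motor_vehicle.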
For example car has a gondola sense in WordNet.", "cite_spans": [ { "start": 363, "end": 377, "text": "(Resnik, 1993;", "ref_id": "BIBREF23" }, { "start": 378, "end": 395, "text": "Li and Abe, 1998;", "ref_id": "BIBREF17" }, { "start": 396, "end": 417, "text": "Clark and Weir, 2002)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The third method uses entries in a distributional thesaurus rather than classes from WordNet. The entries used as classes for representation are selected by virtue of the number of argument types they encompass. As with the WordNet models, the tokens are used to estimate a probability distribution over these entries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the next section, we discuss related work on identifying compositionality. In section 3, we describe the methods we are using for acquiring our models of selectional preference. In section 4, we test our models on a dataset used in previous research. We compare the three types of models individually and also investigate the best performing model when used in combination with other features used in previous research. We conclude in section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most previous work using distributional approaches to compositionality either contrasts distributional information of candidate phrases with constituent words (Schone and Jurafsky, 2001; McCarthy et al., 2003) or uses distributionally similar words to detect nonproductive phrases (Lin, 1999) . Lin (1999) used his method (Lin, 1998) for automatic thesaurus construction. He identified candidate phrases involving several open-class words output from his parser and filtered these by the loglikelihood statistic. 
Lin proposed that if there is a phrase obtained by substitution of either the head or modifier in the phrase with a 'nearest neighbour' from the thesaurus then the mutual information of this and the original phrase must be significantly different for the original phrase to be considered noncompositional. He evaluated the output manually.", "cite_spans": [ { "start": 159, "end": 186, "text": "(Schone and Jurafsky, 2001;", "ref_id": "BIBREF26" }, { "start": 187, "end": 209, "text": "McCarthy et al., 2003)", "ref_id": "BIBREF20" }, { "start": 281, "end": 292, "text": "(Lin, 1999)", "ref_id": "BIBREF19" }, { "start": 295, "end": 305, "text": "Lin (1999)", "ref_id": "BIBREF19" }, { "start": 322, "end": 333, "text": "(Lin, 1998)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "As well as distributional similarity, researchers have used a variety of statistics as indicators of non-compositionality (Blaheta and Johnson, 2001; Krenn and Evert, 2001 ). Fazly and Stevenson (2006) use statistical measures of syntactic behaviour to gauge whether a verb and noun combination is likely to be an idiom. Although they are not specifically detecting compositionality, there is a strong correlation between syntactic rigidity and semantic idiosyncrasy. Venkatapathy and Joshi (2005) combine different statistical and distributional methods using support vector machines (SVMs) for identifying noncompositional verb-object combinations. 
They explored seven features as measures of compositionality:", "cite_spans": [ { "start": 122, "end": 149, "text": "(Blaheta and Johnson, 2001;", "ref_id": "BIBREF6" }, { "start": 150, "end": 171, "text": "Krenn and Evert, 2001", "ref_id": "BIBREF15" }, { "start": 175, "end": 201, "text": "Fazly and Stevenson (2006)", "ref_id": "BIBREF11" }, { "start": 468, "end": 497, "text": "Venkatapathy and Joshi (2005)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "1. frequency 2. pointwise mutual information (Church and Hanks, 1990) , 3. least mutual information difference with similar collocations, based on (Lin, 1999) and using Lin's thesaurus (Lin, 1998) for obtaining the similar collocations.", "cite_spans": [ { "start": 45, "end": 69, "text": "(Church and Hanks, 1990)", "ref_id": "BIBREF9" }, { "start": 147, "end": 158, "text": "(Lin, 1999)", "ref_id": "BIBREF19" }, { "start": 185, "end": 196, "text": "(Lin, 1998)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "4. The distributed frequency of an object, which takes an average of the frequency of occurrence with an object over all verbs occurring with the object above a threshold.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "5. distributed frequency of an object, using the verb, which considers the similarity between the target verb and the verbs occurring with the target object above the specified threshold.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "6. a latent semantic approach (LSA) based on (Sch\u00fctze, 1998; and considering the dissimilarity of the verb-object pair with its constituent verb 7. the same LSA approach, but considering the similarity of the verb-object pair with the verbal form of the object (to capture support verb constructions e.g. 
give a smile). Venkatapathy and Joshi (2005) produced a dataset of verb-object pairs with human judgements of compositionality. We say more about this dataset and Venkatapathy and Joshi's results in section 4 since we use the dataset for our experiments.", "cite_spans": [ { "start": 45, "end": 60, "text": "(Sch\u00fctze, 1998;", "ref_id": null }, { "start": 320, "end": 349, "text": "Venkatapathy and Joshi (2005)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this paper, we investigate the use of selectional preferences to detect compositionality. Bannard (2002) did some pioneering work to try and establish a link between the compositionality of verb particle constructions and the selectional preferences of the multiword and its constituent verb.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "His results were hampered by models based on (Li and Abe, 1998) which involved rather uninformative models at the roots of WordNet. There are several reasons for this. The classes for the model are selected using MDL by compromising between a simple model with few classes and one which explains the data well. The models are particularly affected by the quantity of data available (Wagner, 2002) . Also noise from frequent but idiosyncratic or polysemous arguments weakens the signal. There is scope for experimenting with other approaches such as (Clark and Weir, 2002) , however, we feel a type-based approach is worthwhile to avoid the noise introduced from frequent but polysemous arguments and bias from highly frequent arguments which might be part of a multiword rather than a prototypical argument of the predicate in question, for example eat hat. In contrast to Bannard, our experiments are with verb-object combinations rather than verb particle constructions. 
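Feature 2 above, pointwise mutual information, can be sketched from corpus counts as follows; the counts in the example are hypothetical, not BNC figures:

```python
import math

def pmi(pair_count, verb_count, object_count, corpus_size):
    """Pointwise mutual information of a verb-object pair:
    log2( p(verb, object) / (p(verb) * p(object)) )."""
    p_pair = pair_count / corpus_size
    p_verb = verb_count / corpus_size
    p_object = object_count / corpus_size
    return math.log2(p_pair / (p_verb * p_object))
```

For instance, with 10 joint occurrences against 100 occurrences of each word in a 10,000-token corpus, the pair co-occurs ten times more often than chance would predict, giving a PMI of log2(10), roughly 3.32 bits.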
We compare Li and Abe's models with WordNet models which use the number of argument types to obtain the classes for representation of the selectional preferences. In addition to experiments with these WordNet models, we propose models using entries in distributional thesauruses for representing preferences.", "cite_spans": [ { "start": 45, "end": 63, "text": "(Li and Abe, 1998)", "ref_id": "BIBREF17" }, { "start": 382, "end": 396, "text": "(Wagner, 2002)", "ref_id": "BIBREF31" }, { "start": 549, "end": 571, "text": "(Clark and Weir, 2002)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "All models were acquired from verb-object data extracted using the RASP parser (Briscoe and Carroll, 2002) from the 90 million words of written English from the BNC (Leech, 1992) . We extracted verb and common noun tuples where the noun is the argument head of the object relation. The parser was also used to extract the grammatical relation data used for acquisition of the thesaurus described below in section 3.3.", "cite_spans": [ { "start": 79, "end": 106, "text": "(Briscoe and Carroll, 2002)", "ref_id": "BIBREF8" }, { "start": 165, "end": 178, "text": "(Leech, 1992)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Three Methods for Acquiring Selectional Preferences", "sec_num": "3" }, { "text": "This approach is a reimplementation of Li and Abe (1998) . Each selectional preference model (referred to as a tree cut model, or TCM) comprises a set of disjunctive noun classes selected from all the possibilities in the WordNet hyponym hierarchy 3 using MDL (Rissanen, 1978) . The TCM covers all the noun senses in the WordNet hierarchy and is associated with a probability distribution over these noun senses in the hierarchy reflecting the argument head data occurring in the given grammatical relationship with the specified verb. 
MDL finds the classes in the TCM by considering the cost measured in bits of describing both the model and the argument head data encoded in the model. A compromise is made by having as simple a model as possible using classes further up the hierarchy whilst also providing a good model for the set of argument head tokens (T K).", "cite_spans": [ { "start": 39, "end": 56, "text": "Li and Abe (1998)", "ref_id": "BIBREF17" }, { "start": 260, "end": 276, "text": "(Rissanen, 1978)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "TCMs", "sec_num": "3.1" }, { "text": "The classes are selected by recursing from the top of the WordNet hierarchy comparing the cost (or description length) of using the mother class to the cost of using the hyponym daughter classes. In any path, the mother is preferred unless using the daughters would reduce the cost. If using the daughters for the model is less costly than the mother then the recursion continues to compare the cost of the hyponyms beneath.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TCMs", "sec_num": "3.1" }, { "text": "The cost (or description length) for a set of classes is calculated as the sum of the model description length (mdl) and the data description length (ddl) 4 :-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TCMs", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "mdl + ddl = \\frac{k}{2} \\times \\log |TK| + \\left( -\\sum_{tk \\in TK} \\log p(tk) \\right)", "eq_num": "(1)" } ], "section": "TCMs", "sec_num": "3.1" }, { "text": "k is the number of WordNet classes being currently considered for the TCM minus one. The MDL method uses the size of T K on the assumption that a larger dataset warrants a more detailed model. The cost of describing the argument head data is calculated using the log of the probability estimate from the classes currently being considered for the model. 
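As a minimal sketch of this cost computation (equation 1), assuming base-2 logarithms (the choice of base only rescales the costs) and toy inputs rather than real WordNet tree cuts:

```python
import math

def description_length(k, tokens, prob):
    """Cost in bits of a candidate tree cut: model cost plus data cost.

    k      -- number of classes in the candidate cut minus one
    tokens -- the argument head tokens TK (a list, with repeats)
    prob   -- maps each token to its probability under the candidate classes
    """
    mdl = (k / 2) * math.log2(len(tokens))            # model description length
    ddl = -sum(math.log2(prob[tk]) for tk in tokens)  # data description length
    return mdl + ddl
```

The MDL recursion described above would keep the mother class unless replacing it with its daughter classes yields a lower total under this function.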
The probability estimate for a class being considered for the model is calculated using the cumulative frequency of all the hyponym nouns under that class that occur in T K, divided by the number of noun senses that these nouns have, to account for their polysemy. This cumulative frequency is also divided by the total number of noun hyponyms under that class in WordNet to obtain a smoothed estimate for all nouns under the class. The probability of the class is obtained by dividing this frequency estimate by the total frequency of the argument heads. The algorithm is described fully by Li and Abe (1998) .", "cite_spans": [ { "start": 947, "end": 964, "text": "Li and Abe (1998)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "TCMs", "sec_num": "3.1" }, { "text": "4 See (Li and Abe, 1998) for a full explanation. A small portion of the TCM for the object slot of park is shown in figure 1. WordNet classes are displayed in boxes with a label which best reflects the meaning of the class. The probability estimates are shown for the classes on the TCM. Examples of the argument head data are displayed below the WordNet classes with dotted lines indicating membership at a hyponym class beneath these classes. We cannot show the full TCM due to lack of space, but we show some of the higher probability classes which cover some typical nouns that occur as objects of park. Note that the probability under the classes abstract entity, way and location arises because of a systematic parsing error where adverbials such as distance in park illegally some distance from the railway station are identified by the parser as objects. 
Systematic noise from the parser has an impact on all the selectional preference models described in this paper.", "cite_spans": [ { "start": 6, "end": 24, "text": "(Li and Abe, 1998)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "TCMs", "sec_num": "3.1" }, { "text": "We propose a method of acquiring selectional preferences which, instead of covering all the noun senses in WordNet, just gives a probability distribution over a portion of prototypical classes; we refer to these models as WNPROTOs. A WNPROTO consists of classes within the noun hierarchy which have the highest proportion of word types occurring in the argument head data, rather than using the number of tokens, or frequency, as is used for the TCMs. This allows less frequent but potentially informative arguments to have some bearing on the acquired models, reducing the impact of highly frequent but polysemous arguments. We then used the frequency data to populate these selected classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WNPROTOs", "sec_num": "3.2" }, { "text": "The classes (C) in the WNPROTO are selected from those which include at least a threshold of 2 argument head types 5 occurring in the training data. Each argument head in the training data is disambiguated according to whichever of the WordNet classes it occurs at or under which has the highest 'type ratio'. Let T Y be the set of argument head types in the object slot of the verb for which we are acquiring the preference model. The type ratio for a class (c) is the ratio of noun types (ty \u2208 T Y ) occurring in the training data also listed at or beneath that class in WordNet to the total number of noun types listed at or beneath that particular class in WordNet (wn ty \u2208 c). 
The argument types attested in the training data are divided by the number of WordNet classes that the noun (classes(ty)) belongs to, to account for polysemy in the training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WNPROTOs", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "type\\_ratio(c) = \\frac{\\sum_{ty \\in TY \\cap c} \\frac{1}{|classes(ty)|}}{|wn_{ty} \\in c|}", "eq_num": "(2)" } ], "section": "WNPROTOs", "sec_num": "3.2" }, { "text": "If more than one class has the same type ratio then the argument is not used for calculating the probability of the preference model. In this way, only arguments that can be disambiguated are used for calculating the probability distribution. The advantage of using the type ratio to determine the classes used to represent the model and to disambiguate the arguments is that it prevents high frequency verb noun combinations from masking the information from prototypical but low frequency arguments. We wish to use classes which are as representative of the argument head types as possible to help detect when an argument head is not related to these classes and is therefore more likely to be non-compositional.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WNPROTOs", "sec_num": "3.2" }, { "text": "For example, the class motor vehicle is selected for the WNPROTO model of the object slot of park even though there are 5 meanings of car in WordNet including elevator car and gondola. There are 174 occurrences of car which overwhelm the frequency of the other objects (e.g. van 11, vehicle 8) but by looking for classes with a high proportion of types (rather than word tokens) car is disambiguated appropriately and the class motor vehicle is selected for representation. 5 We have experimented with a threshold of 3 and obtained similar results. 
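The type ratio of equation (2) can be sketched as follows; the sense inventory passed in below is an invented illustration, not WordNet's:

```python
def type_ratio(attested_under_c, wordnet_types_under_c, classes_of):
    """Type ratio of a candidate WordNet class c, sketched.

    attested_under_c      -- argument head types from TY listed at/under c
    wordnet_types_under_c -- total noun types listed at/under c in WordNet
    classes_of            -- maps a noun to the classes it belongs to
    """
    # Each attested type counts 1/(number of its classes) to discount polysemy.
    weighted = sum(1 / len(classes_of(ty)) for ty in attested_under_c)
    return weighted / wordnet_types_under_c

# Toy sense inventory: 'car' is 5-ways polysemous, 'van' is monosemous.
senses = {
    "car": ["motor_vehicle", "elevator_car", "gondola", "cable_car", "railcar"],
    "van": ["motor_vehicle"],
}
```

So a frequent but polysemous argument such as car contributes only 1/5 of a type to each candidate class, which is what lets type-based selection resist noise from its unrelated senses.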
The relative frequency of each class is obtained from the set of disambiguated argument head tokens and used to provide the probability distribution over this set of classes. Note that in WNPROTO, classes can be subsumed by others in the hyponym hierarchy. The probability assigned to a class is applicable to any descendants in the hyponym hierarchy, except those within any hyponym classes within the WNPROTO. The algorithm for selecting C and calculating the probability distribution is shown as Algorithm 1. Note that we use brackets for comments.", "cite_spans": [ { "start": 475, "end": 476, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "WNPROTOs", "sec_num": "3.2" }, { "text": "In figure 2 we show a small portion of the WNPROTO for park. Again, WordNet classes are displayed in boxes with a label which best reflects the meaning of the class. The probability estimates are shown in the boxes for all the classes included in the WNPROTO. The classes in the WNPROTO model are shown with dashed lines. Examples of the argument head data are displayed below the WordNet classes with dotted lines indicating membership at a hyponym class beneath these classes. We cannot show the full WNPROTO due to lack of space, but we show some of the classes with higher probability which cover some typical nouns that occur as objects of park. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WNPROTOs", "sec_num": "3.2" }, { "text": "We use a thesaurus acquired using the method proposed by Lin (1998) . For input we used the grammatical relation data from automatic parses of the BNC. For each noun we considered the cooccurring verbs in the object and subject relation, the modifying nouns in noun-noun relations and the modifying adjectives in adjective-noun relations. 
Each thesaurus entry consists of the target noun and the 50 nouns most similar to it according to Lin's measure of distributional similarity.", "cite_spans": [ { "start": 57, "end": 67, "text": "Lin (1998)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "DSPROTOs", "sec_num": "3.3" }, { "text": "The argument head noun types (T Y ) are used to find the entries in the thesaurus as the 'classes' (C) of the selectional preference for a given verb. As with WNPROTOs, we only cover argument types which form coherent groups with other argument types since we wish i) to remove noise and ii) to be able to identify argument types which are not related to the other types and therefore may be noncompositional. As our starting point we only consider an argument type as a class for C if its entry in the thesaurus covers at least a threshold of 2 types. 6 To select C we use a best-first search. This method processes each argument type in T Y in order of the number of the other argument types from T Y that it has in its thesaurus entry of 50 similar nouns. An argument head is selected as a class for C (cty \u2208 C) 7 if it covers at least 2 of the argument heads that are not in the thesaurus entries of any of the other classes already selected for C. Each argument head is disambiguated by whichever class in C it is listed under in the thesaurus that has the largest number of the T Y in its thesaurus entry. When the algorithm finishes processing the ordered argument heads to select C, all argument head types are disambiguated by C apart from those which after disambiguation occur in isolation in a class without other argument types. Finally a probability distribution over C is estimated using the frequency (tokens) of argument types that occur in the thesaurus entries for any cty \u2208 C. 
If an argument type occurs in the entry of more than one cty then it is assigned to whichever of these has the largest number of disambiguated argument head types, and its token frequency is attributed to that class. We show the algorithm as Algorithm 2. [Figure 3, first four classes of the DSPROTO model for park, shown as class (p(c)) followed by its disambiguated objects (freq): van (0.86): car (174), van (11), vehicle (8), . . . ; mile (0.05): street (5), distance (4), mile (1), . . . ; yard (0.03): corner (4), lane (3), door (1); backside (0.02): backside (2), bum (1), butt (1), . . . ]", "cite_spans": [ { "start": 555, "end": 556, "text": "6", "ref_id": null } ], "ref_spans": [ { "start": 1870, "end": 1878, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "DSPROTOs", "sec_num": "3.3" }, { "text": "The algorithms for WNPROTO (algorithm 1) and DSPROTO (algorithm 2) differ because of the nature of the inventories of candidate classes (WordNet and the distributional thesaurus). There are a great many candidate classes in WordNet. The WNPROTO algorithm selects the classes from all those that the argument heads belong to directly and indirectly by looping over all argument types to find the class that disambiguates each by having the largest type ratio calculated using the undisambiguated argument heads. The DSPROTO only selects classes from the fixed set of argument types. The algorithm loops over the argument types with at least two argument heads in the thesaurus entry and ordered by the number of undisambiguated argument heads in the thesaurus entry. This is a best-first search to minimise the number of argument heads used in C but maximise the coverage of argument types.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DSPROTOs", "sec_num": "3.3" }, { "text": "In figure 3, we show part of a DSPROTO model for the object of park. 
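A heavily simplified sketch of this best-first class selection, using toy thesaurus entries rather than Lin's 50-neighbour lists; this compresses Algorithm 2 and is not a faithful reimplementation:

```python
def select_dsproto_classes(arg_types, neighbours, min_new=2):
    """Best-first sketch of DSPROTO class selection.

    arg_types  -- argument head types TY for the verb's object slot
    neighbours -- maps a noun to the set of nouns in its thesaurus entry
    Returns the argument types chosen as classes C.
    """
    def covered_types(ty):
        # Other argument types appearing in ty's thesaurus entry.
        return neighbours.get(ty, set()) & (arg_types - {ty})

    # Process candidates in order of how many other argument types they cover.
    candidates = sorted(
        (ty for ty in arg_types if len(covered_types(ty)) >= min_new),
        key=lambda ty: len(covered_types(ty)),
        reverse=True,
    )
    chosen, covered = [], set()
    for ty in candidates:
        newly = covered_types(ty) - covered
        if len(newly) >= min_new:  # must add enough not-yet-covered types
            chosen.append(ty)
            covered |= newly | {ty}
    return chosen
```

With objects of park such as {car, van, jeep, mile, street, distance} and entries where car neighbours van and jeep while mile neighbours street and distance, both car and mile would be selected, grouping the arguments into two coherent clusters.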
8 Note again that the class mile arises because of a systematic parsing error where adverbials such as distance in park illegally some distance from the railway station are identified by the parser as objects. Venkatapathy and Joshi (2005) produced a dataset of verb-object pairs with human judgements of compositionality. They obtained values of r s between 0.111 and 0.300 by individually applying the 7 features described above in section 2. The best correlation was given by feature 7 and the second best was feature 3. They combined all 7 features using SVMs and splitting their data into test and training data and achieved an r s of 0.448, which demonstrates significantly better correlation with the human gold-standard than any of the features in isolation.", "cite_spans": [ { "start": 279, "end": 308, "text": "Venkatapathy and Joshi (2005)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "DSPROTOs", "sec_num": "3.3" }, { "text": "We evaluated our selectional preference models using the verb-object pairs produced by Venkatapathy and Joshi (2005) . 9 This dataset has 765 verb-object collocations which have been given a rating between 1 and 6 by two annotators (both fluent speakers of English). Kendall's Tau (Siegel and Castellan, 1988) was used to measure agreement, and a score of 0.61 was obtained which was highly significant. The ranks of the two annotators gave a Spearman's rank-correlation coefficient (r s ) of 0.71.", "cite_spans": [ { "start": 87, "end": 116, "text": "Venkatapathy and Joshi (2005)", "ref_id": "BIBREF30" }, { "start": 281, "end": 309, "text": "(Siegel and Castellan, 1988)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "The Verb-Object pairs included some adjectives (e.g. happy, difficult, popular), pronouns and complements e.g. become director. 
We used the subset of 638 verb-object pairs that involved common nouns in the object relationship, since our preference models focused on the object relation for common nouns. For each verb-object pair we used the preference models acquired from the RASP parses of the BNC to obtain the probability of the class that this object occurs under. Where the object noun is a member of several classes (classes(noun) \u2208 C) in the model, the class with the largest probability is used. Note though that for WNPROTOs we have the added constraint that a hyponym class from C is selected in preference to a hypernym in C. Compositionality of an object noun and verb is computed as:

comp(noun, verb) = max_{c \u2208 classes(noun) \u2208 C} p(c|verb)   (3)
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We use the probability of the class, rather than an estimate of the probability of the object, because we want to determine how likely any word belonging to this class is to occur with the given verb, rather than the probability of the specific noun, which may be infrequent yet typical of the objects that occur with this verb. For example, convertible may be an infrequent object of park, but it is quite likely given its membership of the class motor vehicle. We do not want to assume anything about the frequency of non-compositional verb-object combinations, just that they are unlikely to be members of classes which represent prototypical objects. We will contrast these models with a baseline frequency feature used by Venkatapathy and Joshi (Table 1: Correlation scores for 638 verb-object pairs). We use our selectional preference models to provide the probability that a candidate is representative of the typical objects of the verb. That is, if the object might typically occur in such a relationship then this should lessen the chance that this verb-object combination is non-compositional. 
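Equation (3) can be sketched in a few lines. The model probabilities and class memberships below are hypothetical toy values (loosely modelled on the park example), not the acquired preference models.

```python
# Minimal sketch of equation (3): the compositionality score is the largest
# class probability p(c|verb) among the model classes the object noun belongs to.
model = {"van": 0.86, "mile": 0.05, "yard": 0.03, "backside": 0.02}  # toy p(c|park)
classes = {"car": ["van"], "convertible": ["van"], "fat": []}        # toy classes(noun)

def comp(noun, verb_model):
    # candidates: probabilities of the noun's classes that appear in the model
    cands = [verb_model[c] for c in classes.get(noun, []) if c in verb_model]
    return max(cands) if cands else 0.0

print(comp("convertible", model))  # infrequent object, but in a likely class -> 0.86
print(comp("fat", model))          # no prototypical class -> 0.0
```

This mirrors the point made below: convertible scores highly via its class even if it is itself rare, while a noun with no prototypical class gets a low score.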
We used the probability of the classes from our 3 selectional preference models to rank the pairs and then used Spearman's rank-correlation coefficient (r s ) to compare these ranks with the ranks from the gold standard.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Our results for the three types of preference models are shown in the first section of table 1. 10 All the correlation values are significant, but we note that using the type-based selectional preference models achieves a far greater correlation than using the TCMs. The DSPROTO models achieve the best results, which is very encouraging given that they only require raw data and an automatic parser to obtain the grammatical relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We applied 4 of the features used by Venkatapathy and Joshi (2005) 11 and described in section 2 to our subset of 638 items. These features were obtained using the same BNC dataset used by Venkatapathy and Joshi, which was parsed with Bikel's parser (Bikel, 2004). We obtained correlation values for these features as shown in table 1 under V&J. These features are feature 1, frequency; feature 2, pointwise mutual information; feature 3, based on (Lin, 1999); and feature 7, an LSA feature which considers the similarity of the verb-object pair with the verbal form of the object. Pointwise mutual information did surprisingly well on this 84% subset of the data; however, the DSPROTO preferences still outperformed this feature. We combined the DSPROTO and V&J features with an SVM ranking function and used 10-fold cross-validation as Venkatapathy and Joshi did. We contrast the result with the V&J features without the preference models. 
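The rank-and-correlate evaluation step described above can be sketched as follows, with hypothetical scores and ratings and a pure-Python Spearman's r_s (the standard no-ties formula).

```python
# Sketch of the evaluation: rank pairs by model score and by gold rating,
# then compute Spearman's rank-correlation coefficient (no-ties formula).
def spearman_rs(a, b):
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

model_scores = [0.86, 0.05, 0.40, 0.01, 0.60]  # hypothetical preference-model scores
gold_ratings = [6, 1, 4, 2, 5]                 # hypothetical 1-6 gold judgements
print(spearman_rs(model_scores, gold_ratings))  # -> 0.9
```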
The results in the bottom section of table 1 demonstrate that the preference models can be combined with other features to improve results further.", "cite_spans": [ { "start": 253, "end": 266, "text": "(Bikel, 2004)", "ref_id": "BIBREF5" }, { "start": 449, "end": 460, "text": "(Lin, 1999)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We have demonstrated that the selectional preferences of a verbal predicate can be used to indicate if a specific combination with an object is non-compositional. We have shown that selectional preference models which represent prototypical arguments and focus on argument types (rather than tokens) do well at the task. Models produced from distributional thesauruses are the most promising, which is encouraging because the technique could be applied to a language without a man-made thesaurus. We find that the probability estimates from our models show a highly significant correlation, and are very promising for detecting non-compositional verb-object pairs, in comparison to individual features used previously. Further comparison of WNPROTOs and DSPROTOs to other WordNet models is warranted, to contrast the effect of our proposal for disambiguation using word types with iterative approaches, particularly those of Clark and Weir (2002). A benefit of the DSPROTOs is that they do not require a hand-crafted inventory. 
It would also be worthwhile comparing the use of raw data directly, both from the BNC and from Google's Web 1T corpus (Brants and Franz, 2006), since web counts have been shown to outperform the Clark and Weir models on a pseudo-disambiguation task (Keller and Lapata, 2003).", "cite_spans": [ { "start": 918, "end": 939, "text": "Clark and Weir (2002)", "ref_id": "BIBREF10" }, { "start": 1140, "end": 1163, "text": "(Brants and Franz, 2006", "ref_id": "BIBREF7" }, { "start": 1271, "end": 1296, "text": "(Keller and Lapata, 2003)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Directions for Future Work", "sec_num": "5" }, { "text": "We believe that preferences should NOT be used in isolation. Whilst a low preference for a noun may be indicative of peculiar semantics, this may not always be the case, for example chew the fat. Certainly it would be worth combining the preferences with other measures, such as syntactic fixedness (Fazly and Stevenson, 2006). We also believe it is worth targeting features to specific types of constructions; for example, light verb constructions undoubtedly warrant special treatment (Stevenson et al., 2003). The selectional preference models we have proposed here might also be applied to other tasks. 
We hope to use these models in tasks such as diathesis alternation detection (McCarthy, 2000; Tsang and Stevenson, 2004) and to contrast them with WordNet models previously used for this purpose.", "cite_spans": [ { "start": 299, "end": 326, "text": "(Fazly and Stevenson, 2006)", "ref_id": "BIBREF11" }, { "start": 487, "end": 511, "text": "(Stevenson et al., 2003)", "ref_id": "BIBREF28" }, { "start": 683, "end": 699, "text": "(McCarthy, 2000;", "ref_id": "BIBREF21" }, { "start": 700, "end": 726, "text": "Tsang and Stevenson, 2004)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Directions for Future Work", "sec_num": "5" }, { "text": "We use object to refer to direct objects.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Argument heads are the nouns occurring in the object slot of the target verb.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use WordNet version 2.1 for the work in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "As with the WNPROTOs, we experimented with a value of 3 for this threshold and obtained similar results. 7 We use cty for the classes of the DSPROTO. These classes are simply groups of nouns which occur under the entry of a noun (ty) in the thesaurus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We cannot show the full model due to lack of space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We show absolute values of correlation following (Venkatapathy and Joshi, 2005). 11 The other 3 features performed less well on this dataset, so we do not report the details here. 
This seems to be because they worked particularly well with the adjective and pronoun data in the full dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We acknowledge support from the Royal Society UK for a Dorothy Hodgkin Fellowship to the first author. We thank the anonymous reviewers for their constructive comments on this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "6" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Hiding a semantic class hierarchy in a Markov model", "authors": [ { "first": "Steven", "middle": [], "last": "Abney", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Light", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the ACL Workshop on Unsupervised Learning in Natural Language Processing", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Abney and Marc Light. 1999. Hiding a semantic class hierarchy in a Markov model. In Proceedings of the ACL Workshop on Unsupervised Learning in Nat- ural Language Processing, pages 1-8.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "An empirical model of multiword expression decomposability", "authors": [ { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Bannard", "suffix": "" }, { "first": "Takaaki", "middle": [], "last": "Tanaka", "suffix": "" }, { "first": "Dominic", "middle": [], "last": "Widdows", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the ACL Workshop on multiword expressions: analysis, acquisition and treatment", "volume": "", "issue": "", "pages": "89--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Baldwin, Colin Bannard, Takaaki Tanaka, and Dominic Widdows. 2003. An empirical model of multiword expression decomposability. 
In Proceed- ings of the ACL Workshop on multiword expressions: analysis, acquisition and treatment, pages 89-96.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A statistical approach to the semantics of verb-particles", "authors": [ { "first": "Colin", "middle": [], "last": "Bannard", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Lascarides", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the ACL Workshop on multiword expressions: analysis, acquisition and treatment", "volume": "", "issue": "", "pages": "65--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Bannard, Timothy Baldwin, and Alex Lascarides. 2003. A statistical approach to the semantics of verb-particles. In Proceedings of the ACL Workshop on multiword expressions: analysis, acquisition and treatment, pages 65-72.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Statistical techniques for automatically inferring the semantics of verbparticle constructions", "authors": [ { "first": "Colin", "middle": [], "last": "Bannard", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin. Bannard. 2002. Statistical techniques for automatically inferring the semantics of verb- particle constructions. Technical Report WP-2002- 06, University of Edinburgh, School of Informatics. http://lingo.stanford.edu/pubs/WP-2002-06.pdf.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Learning about the meaning of verb-particle constructions from corpora", "authors": [ { "first": "Colin", "middle": [], "last": "Bannard", "suffix": "" } ], "year": 2005, "venue": "Computer Speech and Language", "volume": "19", "issue": "4", "pages": "467--478", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Bannard. 2005. Learning about the meaning of verb-particle constructions from corpora. 
Computer Speech and Language, 19(4):467-478.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A distributional analysis of a lexicalized statistical parsing model", "authors": [ { "first": "M", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "", "middle": [], "last": "Bikel", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel M. Bikel. 2004. A distributional analysis of a lex- icalized statistical parsing model. In Proceedings of the Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP 2004), Barcelona, Spain, July. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Unsupervised learning of multi-word verbs", "authors": [ { "first": "Don", "middle": [], "last": "Blaheta", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the ACL Workshop on Collocations", "volume": "", "issue": "", "pages": "54--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Don Blaheta and Mark Johnson. 2001. Unsuper- vised learning of multi-word verbs. In Proceedings of the ACL Workshop on Collocations, pages 54-60, Toulouse, France.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Web 1T 5-gram corpus version 1.1", "authors": [ { "first": "Thorsten", "middle": [], "last": "Brants", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Franz", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Brants and Alex Franz. 2006. Web 1T 5-gram corpus version 1.1. 
Technical Report.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Robust accurate statistical annotation of general text", "authors": [ { "first": "Edward", "middle": [], "last": "Briscoe", "suffix": "" }, { "first": "John", "middle": [], "last": "Carroll", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Third International Conference on Language Resources and Evaluation (LREC)", "volume": "", "issue": "", "pages": "1499--1504", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edward Briscoe and John Carroll. 2002. Robust accurate statistical annotation of general text. In Proceedings of the Third International Conference on Language Resources and Evaluation (LREC), pages 1499-1504, Las Palmas, Canary Islands, Spain.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Word association norms, mutual information and lexicography", "authors": [ { "first": "Kenneth", "middle": [], "last": "Church", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Hanks", "suffix": "" } ], "year": 1990, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--312", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth Church and Patrick Hanks. 1990. Word asso- ciation norms, mutual information and lexicography. Computational Linguistics, 19(2):263-312.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Class-based probability estimation using a semantic hierarchy", "authors": [ { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "David", "middle": [], "last": "Weir", "suffix": "" } ], "year": 2002, "venue": "Computational Linguistics", "volume": "28", "issue": "2", "pages": "187--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Clark and David Weir. 2002. Class-based prob- ability estimation using a semantic hierarchy. 
Compu- tational Linguistics, 28(2):187-206.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Automatically constructing a lexicon of verb phrase idiomatic combinations", "authors": [ { "first": "Afsaneh", "middle": [], "last": "Fazly", "suffix": "" }, { "first": "Suzanne", "middle": [], "last": "Stevenson", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL-2006)", "volume": "", "issue": "", "pages": "337--344", "other_ids": {}, "num": null, "urls": [], "raw_text": "Afsaneh Fazly and Suzanne Stevenson. 2006. Automat- ically constructing a lexicon of verb phrase idiomatic combinations. In Proceedings of the 11th Conference of the European Chapter of the Association for Com- putational Linguistics (EACL-2006), pages 337-344, Trento, Italy, April.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "WordNet, An Electronic Lexical Database", "authors": [], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christiane Fellbaum, editor. 1998. WordNet, An Elec- tronic Lexical Database. The MIT Press, Cambridge, MA.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Generalizing automatically generated selectional patterns", "authors": [ { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "John", "middle": [], "last": "Sterling", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 15th International Conference of Computational Linguistics. COLING-94", "volume": "I", "issue": "", "pages": "742--747", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ralph Grishman and John Sterling. 1994. Generalizing automatically generated selectional patterns. In Pro- ceedings of the 15th International Conference of Com- putational Linguistics. 
COLING-94, volume I, pages 742-747.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Using the web to obtain frequencies for unseen bigrams", "authors": [ { "first": "Frank", "middle": [], "last": "Keller", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "3", "pages": "459--484", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frank Keller and Mirella Lapata. 2003. Using the web to obtain frequencies for unseen bigrams. Computational Linguistics, 29(3):459-484.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Can we do better than frequency? A case study on extracting PP-verb collocations", "authors": [ { "first": "Brigitte", "middle": [], "last": "Krenn", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Evert", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the ACL Workshop on Collocations", "volume": "", "issue": "", "pages": "39--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brigitte Krenn and Stefan Evert. 2001. Can we do better than frequency? A case study on extracting PP-verb collocations. In Proceedings of the ACL Workshop on Collocations, pages 39-46, Toulouse, France.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "100 million words of English: the British National Corpus", "authors": [ { "first": "Geoffrey", "middle": [], "last": "Leech", "suffix": "" } ], "year": 1992, "venue": "Language Research", "volume": "28", "issue": "1", "pages": "1--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "Geoffrey Leech. 1992. 100 million words of English: the British National Corpus. 
Language Research, 28(1):1-13.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Generalizing case frames using a thesaurus and the MDL principle", "authors": [ { "first": "Hang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Naoki", "middle": [], "last": "Abe", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "24", "issue": "2", "pages": "217--244", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hang Li and Naoki Abe. 1998. Generalizing case frames using a thesaurus and the MDL principle. Computa- tional Linguistics, 24(2):217-244.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Automatic retrieval and clustering of similar words", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1998, "venue": "Proceedings of COLING-ACL 98", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of COLING-ACL 98, Montreal, Canada.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Automatic identification of noncompositional phrases", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1999, "venue": "Proceedings of ACL-99", "volume": "", "issue": "", "pages": "317--324", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin. 1999. Automatic identification of non- compositional phrases. 
In Proceedings of ACL-99, pages 317-324, Univeristy of Maryland, College Park, Maryland.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Detecting a continuum of compositionality in phrasal verbs", "authors": [ { "first": "Diana", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Keller", "suffix": "" }, { "first": "John", "middle": [], "last": "Carroll", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the ACL 03 Workshop: Multiword expressions: analysis, acquisition and treatment", "volume": "", "issue": "", "pages": "73--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diana McCarthy, Bill Keller, and John Carroll. 2003. Detecting a continuum of compositionality in phrasal verbs. In Proceedings of the ACL 03 Workshop: Multi- word expressions: analysis, acquisition and treatment, pages 73-80.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Using semantic preferences to identify verbal participation in role switching alternations", "authors": [ { "first": "Diana", "middle": [], "last": "Mccarthy", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the First Conference of the North American Chapter of the Association for Computational Linguistics. (NAACL)", "volume": "", "issue": "", "pages": "256--263", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diana McCarthy. 2000. Using semantic preferences to identify verbal participation in role switching alter- nations. In Proceedings of the First Conference of the North American Chapter of the Association for Computational Linguistics. 
(NAACL), pages 256-263, Seattle,WA.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Distributional clustering of English words", "authors": [ { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Nattali", "middle": [], "last": "Tishby", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "183--190", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fernando Pereira, Nattali Tishby, and Lillian Lee. 1993. Distributional clustering of English words. In Pro- ceedings of the 31st Annual Meeting of the Association for Computational Linguistics, pages 183-190.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Selection and Information: A Class-Based Approach to Lexical Relationships", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Resnik. 1993. Selection and Information: A Class-Based Approach to Lexical Relationships. Ph.D. thesis, University of Pennsylvania.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Modelling by shortest data description", "authors": [ { "first": "Jorma", "middle": [], "last": "Rissanen", "suffix": "" } ], "year": 1978, "venue": "Automatica", "volume": "14", "issue": "", "pages": "465--471", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jorma Rissanen. 1978. Modelling by shortest data de- scription. 
Automatica, 14:465-471.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Multiword expressions: A pain in the neck for NLP", "authors": [ { "first": "Ivan", "middle": [], "last": "Sag", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Bond", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Third International Conference on Intelligent Text Processing and Computational Linguistics", "volume": "", "issue": "", "pages": "1--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Sag, Timothy Baldwin, Francis Bond, Ann Copes- take, and Dan Flickinger. 2002. Multiword expres- sions: A pain in the neck for NLP. In Proceedings of the Third International Conference on Intelligent Text Processing and Computational Linguistics (CICLing 2002), pages 1-15, Mexico City, Mexico.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Is knowledge-free induction of multiword unit dictionary headwords a solved problem?", "authors": [ { "first": "Patrick", "middle": [], "last": "Schone", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing", "volume": "24", "issue": "", "pages": "97--123", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Schone and Daniel Jurafsky. 2001. Is knowledge-free induction of multiword unit dictionary headwords a solved problem? In Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing, pages 100-108, Hong Kong. Hinrich Sch\u00fctze. 1998. Automatic word sense discrimi- nation. Computational Linguistics, 24(1):97-123.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Non-Parametric Statistics for the Behavioral Sciences", "authors": [ { "first": "Sidney", "middle": [], "last": "Siegel", "suffix": "" }, { "first": "N. 
John", "middle": [], "last": "Castellan", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sidney Siegel and N. John Castellan. 1988. Non- Parametric Statistics for the Behavioral Sciences. McGraw-Hill, New York.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Statistical measures of the semi-productivity of light verb constructions", "authors": [ { "first": "Suzanne", "middle": [], "last": "Stevenson", "suffix": "" }, { "first": "Afsaneh", "middle": [], "last": "Fazly", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "North", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the ACL 2004 Workshop on Multiword Expressions: Integrating Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suzanne Stevenson, Afsaneh Fazly, and Ryan North. 2003. Statistical measures of the semi-productivity of light verb constructions. In Proceedings of the ACL 2004 Workshop on Multiword Expressions: Integrat- ing Processing, Barcelona, Spain.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Using selectional profile distance to detect verb alternations", "authors": [ { "first": "Vivian", "middle": [], "last": "Tsang", "suffix": "" }, { "first": "Suzanne", "middle": [], "last": "Stevenson", "suffix": "" } ], "year": 2004, "venue": "Proceedings of NAACL Workshop on Computational Lexical Semantics (CLS-04)", "volume": "", "issue": "", "pages": "30--37", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vivian Tsang and Suzanne Stevenson. 2004. Using se- lectional profile distance to detect verb alternations. 
In Proceedings of NAACL Workshop on Computational Lexical Semantics (CLS-04), pages 30-37, Boston, MA.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Measuring the relative compositionality of verb-noun (v-n) collocations by integrating features", "authors": [ { "first": "Sriram", "middle": [], "last": "Venkatapathy", "suffix": "" }, { "first": "Aravind", "middle": [ "K" ], "last": "Joshi", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the joint conference on Human Language Technology and Empirical methods in Natural Language Processing", "volume": "", "issue": "", "pages": "899--906", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sriram Venkatapathy and Aravind K. Joshi. 2005. Mea- suring the relative compositionality of verb-noun (v-n) collocations by integrating features. In Proceedings of the joint conference on Human Language Technology and Empirical methods in Natural Language Process- ing, pages 899-906, Vancouver, B.C., Canada.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Learning thematic role relations for wordnets", "authors": [ { "first": "Andreas", "middle": [], "last": "Wagner", "suffix": "" } ], "year": 2002, "venue": "Proceedings of ESSLLI-2002 Workshop on Machine Learning Approaches in Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Wagner. 2002. Learning thematic role relations for wordnets. 
In Proceedings of ESSLLI-2002 Workshop on Machine Learning Approaches in Computational Linguistics, Trento.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "text": "portion of the TCM for the objects of park.", "uris": null }, "FIGREF1": { "type_str": "figure", "num": null, "text": "Part of WNPROTO for the object slot of park", "uris": null }, "FIGREF2": { "type_str": "figure", "num": null, "text": "Algorithm 1: WNPROTO
C = () {classes in WNPROTO}
D = () {disambiguated ty \u2208 TY}
fD = 0 {frequency of disambiguated items}
TY = argument head types {nouns occurring as objects of verb, with associated frequencies}
C1 \u2286 WordNet where |ty \u2208 TY occurring in c \u2208 C1| > 1
for all ty \u2208 TY do
  find c \u2208 classes(ty) \u2208 C1 where c = argmax_c typeratio(c)
  if c & c \u2209 C then
    add c to C
    add ty \u2194 c to D {disambiguated ty with c}
  end if
end for
for all c \u2208 C do
  if |ty \u2194 c \u2208 D| > 1 then
    fD = fD + frequency(ty) {sum frequencies of types under classes to be used in model}
  else
    remove c from C {classes with fewer than two disambiguated nouns are removed}
  end if
end for
for all c \u2208 C do
  p(c) = frequency-of-all-tys-disambiguated-to-class(c, D) / fD
end for

Algorithm 2: DSPROTO
C = () {classes in DSPROTO}
D = () {disambiguated ty \u2208 TY}
fD = 0 {frequency of disambiguated items}
TY = argument head types {nouns occurring as objects of verb, with associated frequencies}
C1 = cty \u2208 TY where num-types-in-thesaurus(cty, TY) > 1
order C1 by num-types-in-thesaurus(cty, TY) {classes ordered by coverage of argument head types}
for all cty \u2208 ordered C1 do
  Dcty = () {disambiguated for this class}
  for all ty \u2208 TY where in-thesaurus-entry(cty, ty) do
    if ty \u2209 D then
      add ty to Dcty {types disambiguated to this class only if not disambiguated by a class used already}
    end if
  end for
  if |Dcty| > 1 then
    add cty to C
    for all ty \u2208 Dcty do
      add ty \u2194 cty to D {disambiguated ty with cty}
      fD = fD + frequency(ty)
    end for
  end if
end for
for all cty \u2208 C do
  p(cty) = frequency-of-all-tys-disambiguated-to-class(cty, D) / fD
end for", "uris": null }, "TABREF0": { "text": "/www.cis.upenn.edu/\u02dcsriramv/mywork.html.", "html": null, "content": "
9 This verb-object dataset is available from http://www.cis.upenn.edu/\u02dcsriramv/mywork.html.

Table 1: Correlation scores for 638 verb-object pairs

                          r s     p < (one tailed)
selectional preferences
  TCM                     0.090   0.0119
  WNPROTO                 0.223   0.00003
  DSPROTO                 0.398   0.00003
features from V&J
  frequency (f1)          0.141   0.00023
  MI (f2)                 0.274   0.00003
  Lin99 (f3)              0.139   0.00023
  LSA2 (f7)               0.209   0.00003
combination with SVM
  f2,3,7                  0.413   0.00003
  f1,2,3,7                0.419   0.00003
  DSPROTO f1,2,3,7        0.454   0.00003
", "num": null, "type_str": "table" } } } }