{ "paper_id": "I13-1038", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:13:47.141504Z" }, "title": "Full-coverage Identification of English Light Verb Constructions", "authors": [ { "first": "Istv\u00e1n", "middle": [], "last": "Nagy", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Szeged", "location": {} }, "email": "" }, { "first": "Veronika", "middle": [], "last": "Vincze", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Szeged", "location": {} }, "email": "vinczev@inf.u-szeged.hu" }, { "first": "Rich\u00e1rd", "middle": [], "last": "Farkas", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Szeged", "location": {} }, "email": "rfarkas@inf.u-szeged.hu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The identification of light verb constructions (LVCs) is an important task for several applications. Previous studies focused only on a limited set of light verb constructions. Here, we address the full coverage of LVCs. We investigate the performance of different candidate extraction methods on two English full-coverage LVC annotated corpora, where we found that less restrictive candidate extraction methods should be applied. Then we follow a machine learning approach that makes use of an extended and rich feature set to select LVCs among the extracted candidates.", "pdf_parse": { "paper_id": "I13-1038", "_pdf_hash": "", "abstract": [ { "text": "The identification of light verb constructions (LVCs) is an important task for several applications. Previous studies focused only on a limited set of light verb constructions. Here, we address the full coverage of LVCs. We investigate the performance of different candidate extraction methods on two English full-coverage LVC annotated corpora, where we found that less restrictive candidate extraction methods should be applied.
Then we follow a machine learning approach that makes use of an extended and rich feature set to select LVCs among the extracted candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A multiword expression (MWE) is a lexical unit that consists of more than one orthographical word, i.e. a lexical unit that contains spaces and displays lexical, syntactic, semantic, pragmatic and/or statistical idiosyncrasy (Sag et al., 2002; Calzolari et al., 2002) . Light verb constructions (LVCs) (e.g. to take a decision, to take sg into consideration) form a subtype of MWEs, namely, they consist of a nominal and a verbal component, where the verb functions as the syntactic head (the whole construction fulfills the role of a verb in the clause), but the semantic head is the noun (i.e. the noun is used in one of its original senses). The verbal component (also called a light verb) usually loses its original sense to some extent. 1 The meaning of LVCs can only partially be computed on the basis of the meanings of their parts and the way they are related to each other (semi-compositionality). Thus, the result of translating their parts literally can hardly be considered the proper translation of the original expression. Moreover, the same syntactic pattern may belong to an LVC (e.g. make a mistake), a literal verb + noun combination (e.g. make a cake) or an idiom (e.g. make a meal (of something)), which suggests that their identification cannot be based solely on syntactic patterns. Since the syntactic and the semantic head of the construction are not the same, they require special treatment when parsing.
On the other hand, the same construction may function as an LVC in certain contexts while it is just a productive construction in others: compare He gave her a ring made of gold (non-LVC) and He gave her a ring because he wanted to hear her voice (LVC).", "cite_spans": [ { "start": 225, "end": 243, "text": "(Sag et al., 2002;", "ref_id": "BIBREF17" }, { "start": 244, "end": 267, "text": "Calzolari et al., 2002)", "ref_id": "BIBREF3" }, { "start": 741, "end": 742, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In several natural language processing (NLP) applications like information extraction and retrieval, terminology extraction and machine translation, it is important to identify LVCs in context. For example, in machine translation we must know that LVCs form one semantic unit, hence their parts should not be translated separately. For this, LVCs should first be identified in the text to be translated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As we shall show in Section 2, there has been a considerable amount of previous work on LVC detection, but some authors seek to capture just verb-object pairs, while others consider only verbs with prepositional complements. Moreover, many of them exploited only constructions formed with a limited set of light verbs and identified or extracted just one specific type of LVC. However, we see no benefit that any NLP application could gain from these limitations, and here we focus on the full-coverage identification of LVCs.
We train and evaluate statistical models on the Wiki50 and Szeged-ParalellFX (SZPFX) (Vincze, 2012) corpora, which have recently been published with full-coverage LVC annotation.", "cite_spans": [ { "start": 583, "end": 608, "text": "Szeged-ParalellFX (SZPFX)", "ref_id": null }, { "start": 609, "end": 623, "text": "(Vincze, 2012)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We employ a two-stage procedure. First, we identify potential LVC candidates in running texts, empirically comparing various candidate extraction methods; then we use a machine learning-based classifier that exploits a rich feature set to select LVCs from the candidates. The main contributions of this paper can be summarized as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We introduce and evaluate systems for identifying all LVCs and all individual LVC occurrences in a running text, and we do not restrict ourselves to certain specific types of LVCs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We systematically compare and evaluate different candidate extraction methods (earlier published methods and new solutions implemented by us).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We define and evaluate several new feature templates, such as semantic and morphological features, to select LVCs in context from the extracted candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Two approaches have been introduced for LVC detection.
In the first approach, LVC candidates (usually verb-object pairs including one verb from a well-defined set of 3-10 verbs) are extracted from the corpora and these candidates are then classified, without contextual information, as LVCs or not (Stevenson et al., 2004; Tan et al., 2006; Van de Cruys and Moir\u00f3n, 2007; Gurrutxaga and Alegria, 2011) . As a gold standard, lists collected from dictionaries or other annotated corpora are used: if the extracted candidate is classified as an LVC and can be found on the list, it is a true positive, regardless of whether it was a genuine LVC in its context. In the second approach, the goal is to detect individual LVC token instances in a running text, taking contextual information into account (Diab and Bhutada, 2009; Tu and Roth, 2011) . While the first approach assumes that a specific candidate constitutes an LVC in all of its occurrences or in none (i.e. there are no ambiguous cases), the second one may account for the fact that there are contexts where a given candidate functions as an LVC whereas in other contexts it does not; recall the example of give a ring in Section 1.", "cite_spans": [ { "start": 294, "end": 318, "text": "(Stevenson et al., 2004;", "ref_id": "BIBREF20" }, { "start": 319, "end": 336, "text": "Tan et al., 2006;", "ref_id": "BIBREF21" }, { "start": 337, "end": 367, "text": "Van de Cruys and Moir\u00f3n, 2007;", "ref_id": "BIBREF23" }, { "start": 368, "end": 397, "text": "Gurrutxaga and Alegria, 2011)", "ref_id": "BIBREF9" }, { "start": 802, "end": 826, "text": "(Diab and Bhutada, 2009;", "ref_id": "BIBREF6" }, { "start": 827, "end": 845, "text": "Tu and Roth, 2011;", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The authors of Stevenson et al. (2004) , Van de Cruys and Moir\u00f3n (2007) and Gurrutxaga and Alegria (2011) built LVC detection systems with statistical features. Stevenson et al.
(2004) focused on classifying LVC candidates containing the verbs make and take. Other researchers used linguistically motivated statistical measures to distinguish subtypes of verb + noun combinations. However, it is a challenging task to identify rare LVCs in corpus data with statistics-based approaches, since 87% of LVCs occur fewer than three times in the two full-coverage LVC annotated corpora used for evaluation (see Section 3).", "cite_spans": [ { "start": 15, "end": 38, "text": "Stevenson et al. (2004)", "ref_id": "BIBREF20" }, { "start": 50, "end": 73, "text": "Cruys and Moir\u00f3n (2007)", "ref_id": "BIBREF23" }, { "start": 78, "end": 107, "text": "Gurrutxaga and Alegria (2011)", "ref_id": "BIBREF9" }, { "start": 163, "end": 186, "text": "Stevenson et al. (2004)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "A semantic-based method was described in Van de Cruys and Moir\u00f3n (2007) for identifying verb-preposition-noun combinations in Dutch. Their method relies on selectional preferences for both the noun and the verb. Idiomatic and light verb noun + verb combinations were extracted from Basque texts by employing statistical methods (Gurrutxaga and Alegria, 2011) .
Diab and Bhutada (2009) employed rule-based methods to detect LVCs, which are usually based on (shallow) linguistic information, while the domain specificity of the problem has also been highlighted.", "cite_spans": [ { "start": 48, "end": 71, "text": "Cruys and Moir\u00f3n (2007)", "ref_id": "BIBREF23" }, { "start": 328, "end": 358, "text": "(Gurrutxaga and Alegria, 2011)", "ref_id": "BIBREF9" }, { "start": 361, "end": 384, "text": "Diab and Bhutada (2009)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Both statistical and linguistic information were applied by hybrid LVC systems (Tan et al., 2006; Tu and Roth, 2011; Samard\u017ei\u0107 and Merlo, 2010) , which resulted in better recall scores. English and German LVCs were analysed in parallel corpora: the authors of Samard\u017ei\u0107 and Merlo (2010) focused on their manual and automatic alignment. They found that linguistic features (e.g. the degree of compositionality) and the frequency of the construction both have an impact on the alignment of the constructions. Tan et al. (2006) applied machine learning techniques to extract LVCs. They combined statistical and linguistic features, and trained a random forest classifier to classify LVC candidates. Tu and Roth (2011) applied Support Vector Machines to classify verb + noun object pairs in their balanced dataset as true LVCs 2 or not. They compared contextual and statistical features and found that local contextual features performed better on ambiguous examples.", "cite_spans": [ { "start": 83, "end": 101, "text": "(Tan et al., 2006;", "ref_id": "BIBREF21" }, { "start": 102, "end": 120, "text": "Tu and Roth, 2011;", "ref_id": "BIBREF22" }, { "start": 121, "end": 147, "text": "Samard\u017ei\u0107 and Merlo, 2010)", "ref_id": "BIBREF18" }, { "start": 509, "end": 526, "text": "Tan et al. (2006)", "ref_id": "BIBREF21" }, { "start": 698, "end": 716, "text": "Tu and Roth (2011)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Some of the earlier studies aimed at identifying or extracting only a restricted set of LVCs. Most of them focused on verb-object pairs when identifying LVCs (Stevenson et al., 2004; Tan et al., 2006; Cook et al., 2007; Bannard, 2007; Tu and Roth, 2011) , thus they concentrated on structures like give a decision or take control. With languages other than English, authors often select verb + prepositional object pairs (instead of verb-object pairs) and categorise them as LVCs or not. See, e.g. Van de Cruys and Moir\u00f3n (2007) for Dutch LVC detection or Krenn (2008) for German LVC detection. In other cases, only true LVCs were considered (Stevenson et al., 2004; Tu and Roth, 2011) . In some other studies (Cook et al., 2007; Diab and Bhutada, 2009) , the authors only distinguished between the literal and idiomatic uses of verb + noun combinations, and LVCs were classified into these two categories as well.", "cite_spans": [ { "start": 156, "end": 180, "text": "(Stevenson et al., 2004;", "ref_id": "BIBREF20" }, { "start": 181, "end": 198, "text": "Tan et al., 2006;", "ref_id": "BIBREF21" }, { "start": 199, "end": 217, "text": "Cook et al., 2007;", "ref_id": "BIBREF4" }, { "start": 218, "end": 232, "text": "Bannard, 2007;", "ref_id": "BIBREF1" }, { "start": 233, "end": 251, "text": "Tu and Roth, 2011)", "ref_id": "BIBREF22" }, { "start": 502, "end": 525, "text": "Cruys and Moir\u00f3n (2007)", "ref_id": "BIBREF23" }, { "start": 553, "end": 565, "text": "Krenn (2008)", "ref_id": "BIBREF13" }, { "start": 639, "end": 663, "text": "(Stevenson et al., 2004;", "ref_id": "BIBREF20" }, { "start": 664, "end": 682, "text": "Tu and Roth, 2011)", "ref_id": "BIBREF22" }, { "start": 707, "end": 726, "text": "(Cook et al., 2007;", "ref_id": "BIBREF4" }, { "start": 727, "end": 750, "text": "Diab and Bhutada, 2009)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In contrast to previous works, we seek to identify all LVCs in running texts and do not restrict ourselves to certain types of LVCs. For this reason, we experiment with different candidate extraction methods and we present a machine learning-based approach to select LVCs among candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In our experiments, three freely available corpora were used. Two of them contain full-coverage LVC annotation created manually by professional linguists. The annotation guidelines did not contain any restrictions on the inner syntactic structure of the construction, and both true LVCs and vague action verbs were annotated. The Wiki50 contains 50 English Wikipedia articles that were annotated for different types of MWEs (including LVCs) and Named Entities. SZPFX (Vincze, 2012) is an English-Hungarian parallel corpus, in which LVCs are annotated in both languages. It contains texts taken from several domains like fiction, language books and magazines. Here, the English part of the corpus was used.", "cite_spans": [ { "start": 458, "end": 472, "text": "(Vincze, 2012)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3" }, { "text": "In order to compare the performance of our system with others, we also used the dataset of Tu and Roth (2011), which contains 2,162 sentences taken from different parts of the British National Corpus. They only focused on true LVCs in this dataset, and only the verb-object pairs (1,039 positive and 1,123 negative examples) formed with the verbs do, get, give, have, make, take were marked.
Statistical data on the three corpora are listed in Table 1 . Despite the fact that English verb + prepositional constructions were mostly neglected in previous research, both corpora contain several examples of such structures, e.g. take into consideration or come into contact; the ratio of such LVC lemmas is 11.8% and 9.6% in the Wiki50 and SZPFX corpora, respectively. In addition to the verb + object or verb + prepositional object constructions, there are several other syntactic constructions in which LVCs can occur due to their syntactic flexibility. For instance, the nominal component can become the subject in a passive sentence (the photo has been taken), or it can be extended by a relative clause (the photo that has been taken). These cases are responsible for 7.6% and 19.4% of the LVC occurrences in the Wiki50 and SZPFX corpora, respectively. These types cannot be identified when only verb + object pairs are used for LVC candidate selection.", "cite_spans": [], "ref_spans": [ { "start": 444, "end": 451, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Datasets", "sec_num": "3" }, { "text": "Some researchers filtered LVC candidates by selecting only certain verbs that may be part of the construction, e.g. Tu and Roth (2011) . As the full-coverage annotated corpora were available, we were able to check what percentage of LVCs could be covered with this selection. The six verbs used by Tu and Roth (2011) are responsible for about 49% and 63% of all LVCs in the Wiki50 and the SZPFX corpora, respectively. Furthermore, 62 different light verbs occurred in the Wiki50 corpus and 102 in the SZPFX corpus.
All this indicates that focusing on a reduced set of light verbs will lead to the exclusion of a considerable number of LVCs in free texts.", "cite_spans": [ { "start": 116, "end": 134, "text": "Tu and Roth (2011)", "ref_id": "BIBREF22" }, { "start": 298, "end": 316, "text": "Tu and Roth (2011)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3" }, { "text": "Some papers focus only on the identification of true LVCs, neglecting vague action verbs (Stevenson et al., 2004; Tu and Roth, 2011) . However, we see no NLP application that would benefit from such a distinction, since vague action verbs and true LVCs share the properties that are relevant for natural language processing (e.g. they must be treated as one complex predicate (Vincze, 2012) ). We also argue that it is important to separate LVCs and idioms because LVCs are semi-productive and semi-compositional (properties that may be exploited in applications like machine translation or information extraction), in contrast to idioms, which have neither property. All in all, we seek to identify all verbal LVCs (not including idioms) in our study and do not restrict ourselves to certain specific types of LVCs.", "cite_spans": [ { "start": 89, "end": 113, "text": "(Stevenson et al., 2004;", "ref_id": "BIBREF20" }, { "start": 114, "end": 132, "text": "Tu and Roth, 2011)", "ref_id": "BIBREF22" }, { "start": 389, "end": 403, "text": "(Vincze, 2012)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3" }, { "text": "Our goal is to identify each LVC occurrence in running texts, i.e. to take input sentences such as 'We often have lunch in this restaurant' and mark each LVC in it. Our basic approach is to syntactically parse each sentence and extract potential LVCs with different candidate extraction methods. Afterwards, a binary classifier can be used to automatically classify potential LVCs as LVCs or not.
For the automatic classification of candidate LVCs, we implemented a machine learning approach, which is based on a rich feature set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LVC Detection", "sec_num": "4" }, { "text": "As we had two full-coverage LVC annotated corpora where each type and individual occurrence of an LVC was marked in running texts, we were able to examine the characteristics of LVCs in a running text, and evaluate and compare the different candidate extraction methods. When we examined the previously used methods, which just treated the verb-object pairs as potential LVCs, it was revealed that only 73.91% of annotated LVCs in the Wiki50 and 70.61% in SZPFX had a verb-object syntactic relation. Table 2 shows the distribution of dependency label types provided by the Bohnet parser (Bohnet, 2010) for the Wiki50 corpus, and by the Stanford (Klein and Manning, 2003) and Bohnet parsers for the SZPFX corpus. In order to compare the efficiency of the parsers, both were applied using the same dependency representation. In this phase, we found that the Bohnet parser was more successful on the SZPFX corpus, i.e. it could cover more LVCs, hence we applied the Bohnet parser in our further experiments.", "cite_spans": [ { "start": 590, "end": 604, "text": "(Bohnet, 2010)", "ref_id": "BIBREF2" }, { "start": 633, "end": 658, "text": "(Klein and Manning, 2003)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 503, "end": 510, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Candidate Extraction", "sec_num": "4.1" }, { "text": "We define the extended syntax-based candidate extraction method, which, besides the verb-direct object dependency relation, also investigates the verb-prepositional object, verb-relative clause, noun-participial modifier and verb-subject (in passive constructions) syntactic relations between verbs and nouns.
Here, 90.76% of LVCs in the Wiki50 corpus and 87.75% in the SZPFX corpus could be identified with the extended syntax-based candidate extraction method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Extraction", "sec_num": "4.1" }, { "text": "It should be added that some rare examples of split LVCs, where the nominal component is part of an object preceded by a quantifying expression (like he gained much of his fame), can hardly be identified by syntax-based methods since there is no direct link between the verb and the noun. In other cases, the omission of LVCs from candidates is due to the rare and atypical syntactic relation between the noun and the verb (e.g. dep in reach conform). Despite this, such cases are also included in the training and evaluation datasets as positive examples. Our second candidate extractor is the morphology-based candidate extraction method, which was also applied for extracting potential LVCs. In this case, a token sequence was treated as a potential LVC if its POS-tag sequence matched one pattern typical of LVCs (e.g. VERB-NOUN). Although this method was less effective than the extended syntax-based approach, when we merged the extended syntax-based and morphology-based methods, we were able to identify most of the LVCs in the two corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Extraction", "sec_num": "4.1" }, { "text": "The authors of Stevenson et al. (2004) and Tu and Roth (2011) filtered LVC candidates by selecting only certain verbs that could be part of the construction, so we checked what percentage of LVCs could be covered with this selection when we treated just the verb-object pairs as LVC candidates. We found that even the least stringent selection covered only 41.88% of the LVCs in Wiki50 and 47.84% in SZPFX. Hence, we decided to drop any such constraint.
Table 3 shows the results we obtained by applying the different candidate extraction methods on the Wiki50 and SZPFX corpora. ", "cite_spans": [ { "start": 15, "end": 38, "text": "Stevenson et al. (2004)", "ref_id": "BIBREF20" }, { "start": 43, "end": 61, "text": "Tu and Roth (2011)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 455, "end": 462, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Candidate Extraction", "sec_num": "4.1" }, { "text": "For the automatic classification of the candidate LVCs we implemented a machine learning approach, which we will elaborate upon below. Our method is based on a rich feature set with the following categories: statistical, lexical, morphological, syntactic, orthographic and semantic. Statistical features: Potential LVCs were collected from 10,000 Wikipedia pages by the union of the morphology-based and the extended syntax-based candidate extraction methods. The number of occurrences was used as a feature when the candidate was one of the collected phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Machine Learning Based Candidate Classification", "sec_num": "4.2" }, { "text": "Lexical features: We exploit the fact that the most common verbs are typically light verbs, so we selected fifteen typical light verbs from the list of the most frequent verbs taken from the corpora. In this case, we investigated whether the lemmatised verbal component of the candidate was one of these fifteen verbs. The lemma of the head noun was also applied as a lexical feature. The nouns found in LVCs were collected from the corpora, and for each corpus the noun list obtained from the union of the other two corpora was used. Moreover, we constructed lists of lemmatised LVCs from the corpora, and for each corpus the list obtained from the union of the other two corpora was utilised.
In the case of the Tu&Roth dataset, the list obtained from Wiki50 and SZPFX was filtered for the six light verbs and the true LVCs they contained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Machine Learning Based Candidate Classification", "sec_num": "4.2" }, { "text": "Morphological features: The POS candidate extraction method was used as a feature, so when the POS-tag sequence in the text matched one typical 'POS-pattern' of LVCs, the candidate was marked as true; otherwise as false. The 'Verbal-Stem' binary feature focuses on the stem of the noun. For LVCs, the nominal component is typically one that is derived from a verbal stem (make a decision) or coincides with a verb (have a walk). In this case, the phrases were marked as true if the stem of the nominal component had a verbal nature, i.e. it coincided with the stem of a verb. Do and have are often light verbs, but these verbs may occur as auxiliary verbs too. Hence we defined a feature for the two verbs to denote whether or not they were auxiliary verbs in a given sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Machine Learning Based Candidate Classification", "sec_num": "4.2" }, { "text": "Syntactic features: The dependency label between the noun and the verb can also be exploited in identifying LVCs. As we typically found during candidate extraction, the syntactic relation between the verb and the nominal component in an LVC is dobj, pobj, rcmod, partmod or nsubjpass when using the Bohnet parser (Bohnet, 2010) ; hence these relations were defined as features.
The determiner within the candidate LVC was also encoded as another syntactic feature.", "cite_spans": [ { "start": 309, "end": 323, "text": "(Bohnet, 2010)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Machine Learning Based Candidate Classification", "sec_num": "4.2" }, { "text": "Orthographic features: In the case of the 'suffix' feature, it was checked whether the lemma of the noun ended in a given character bi- or trigram. It exploits the fact that many nominal components in LVCs are derived from verbs. The 'number of words' of the candidate LVC was also noted and applied as a feature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Machine Learning Based Candidate Classification", "sec_num": "4.2" }, { "text": "Semantic features: In this case we also exploited the fact that the nominal component is derived from verbs. Activity or event semantic senses were looked for among the hypernyms of the noun in WordNet (Fellbaum, 1998) .", "cite_spans": [ { "start": 202, "end": 218, "text": "(Fellbaum, 1998)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Machine Learning Based Candidate Classification", "sec_num": "4.2" }, { "text": "We experimented with several learning algorithms and our preliminary results showed that decision trees performed the best. This is probably due to the fact that our feature set consists of a few compact (i.e. high-level) features. We trained the J48 classifier of the WEKA package (Hall et al., 2009) , which implements the C4.5 decision tree algorithm (Quinlan, 1993) , with the above-mentioned feature set. We also report results obtained with Support Vector Machines (SVM) (Cortes and Vapnik, 1995) . Table 4 : Results obtained in terms of precision, recall and F-score. DM: dictionary matching. POS: morphology-based candidate extraction. Syntax: extended syntax-based candidate extraction.
POS \u222a Syntax: the merged set of the morphology-based and syntax-based candidate extraction methods.", "cite_spans": [ { "start": 282, "end": 301, "text": "(Hall et al., 2009)", "ref_id": "BIBREF10" }, { "start": 355, "end": 370, "text": "(Quinlan, 1993)", "ref_id": "BIBREF16" }, { "start": 461, "end": 486, "text": "(Cortes and Vapnik, 1995)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 487, "end": 494, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Machine Learning Based Candidate Classification", "sec_num": "4.2" }, { "text": "As the investigated corpora were not big enough to be split into training and test sets of appropriate size, and the different annotation principles ruled out enlarging the training sets with another corpus, we evaluated our models in a 10-fold cross-validation manner on the Wiki50, SZPFX and Tu&Roth datasets. In the case of Wiki50 and SZPFX, where only the positive LVCs were annotated, we employed the F \u03b2=1 score interpreted on the positive class as the evaluation metric. Moreover, we treated as negative all potential LVCs that were extracted by the different extraction methods but were not marked as positive in the gold standard. The resulting datasets were not balanced, and the number of negative examples largely depended on the candidate extraction method applied.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Roth.", "sec_num": null }, { "text": "However, some positive elements in the corpora were not covered in the candidate classification step, since the candidate extraction methods applied could not detect all LVCs in the corpus data. Hence, we treated the omitted LVCs as false negatives in our evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Roth.", "sec_num": null }, { "text": "As a baseline, we applied a context-free dictionary matching method.
First, we gathered the gold-standard LVC lemmas from the two other corpora. Then we marked a candidate from the union of the extended syntax-based and morphology-based methods as an LVC if its light verb together with one of its syntactic dependents was found on the list. Table 4 lists the results obtained on the Wiki50 and SZPFX corpora by using the baseline dictionary matching and our machine learning approach with different machine learning algorithms and different candidate extraction methods. The dictionary matching approach achieved the highest precision on SZPFX, namely 72.65%. Our machine learning-based approach with different candidate extraction methods demonstrated a consistent performance (i.e. an F-score over 50) on the Wiki50 and SZPFX corpora. It is also seen that our machine learning approach with the union of the morphology- and extended syntax-based candidate extraction methods is the most successful method in the case of Wiki50 and SZPFX. On both corpora, it achieved an F-score that was higher than that of the dictionary matching approach (the difference being 10 and 19 percentage points in the case of Wiki50 and SZPFX, respectively).", "cite_spans": [], "ref_spans": [ { "start": 335, "end": 342, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Experiments and Results", "sec_num": "5" }, { "text": "In order to compare the performance of our system with others, we evaluated it on the Tu&Roth dataset (Tu and Roth, 2011) too. Table 5 shows the results obtained using dictionary matching, applying our machine learning-based approach with a rich feature set, and the results published in Tu and Roth (2011) on the Tu&Roth dataset. In this case, the dictionary matching method performed the worst and achieved an accuracy score of 61.25. The results published in Tu and Roth (2011) are good on the positive class, with an F-score of 75.36, but the worst on the negative class, with an F-score of 56.41.
Thus, this approach achieved an accuracy score that was 7.27 percentage points higher than that of the dictionary matching method. Our approach demonstrates a consistent performance (with an F-score over 70) on both the positive and negative classes. It can also be seen that our approach is the most successful on the Tu&Roth dataset: it achieved an accuracy score of 72.51%, which is 3.99 percentage points higher than that obtained by the Tu&Roth method (Tu and Roth, 2011). Table 5: Results of applying different methods on the Tu&Roth dataset. DM: dictionary matching. Tu&Roth Original: the results of Tu and Roth (2011). J48: our model.", "cite_spans": [ { "start": 102, "end": 121, "text": "(Tu and Roth, 2011)", "ref_id": "BIBREF22" }, { "start": 457, "end": 475, "text": "Tu and Roth (2011)", "ref_id": "BIBREF22" }, { "start": 1012, "end": 1031, "text": "(Tu and Roth, 2011)", "ref_id": "BIBREF22" }, { "start": 1162, "end": 1180, "text": "Tu and Roth (2011)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 127, "end": 134, "text": "Table 5", "ref_id": null }, { "start": 1032, "end": 1039, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Experiments and Results", "sec_num": "5" }, { "text": "The machine learning-based method substantially outperformed our dictionary matching baseline model, which underlines the fact that our approach is well suited to LVC detection. As Table 4 shows, our method also proved to be robust, as it obtained roughly the same recall, precision and F-score on the Wiki50 and SZPFX corpora. Our system's performance primarily depends on the candidate extraction method applied. In the case of dictionary matching, recall was primarily limited by the size of the dictionary, but this method achieved fairly good precision. As Table 5 indicates, the dictionary matching method was less effective on the Tu&Roth dataset. 
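The context-free dictionary matching baseline can be sketched as follows; this is our own minimal illustration (the function name and the tiny dictionary are invented), not the actual system code.

```python
# Minimal sketch of the context-free dictionary matching baseline: a
# candidate is labeled as an LVC whenever its light verb together with
# one of its syntactic dependents occurs in a list of gold-standard LVC
# lemma pairs. The dictionary and candidates below are illustrative.

def dictionary_match(light_verb, dependents, lvc_lemmas):
    """True if (light verb, dependent) occurs in the gold lemma list."""
    return any((light_verb, dep) in lvc_lemmas for dep in dependents)

lvc_lemmas = {("take", "decision"), ("make", "mistake"), ("give", "rise")}

print(dictionary_match("make", ["big", "mistake"], lvc_lemmas))  # True
print(dictionary_match("make", ["cake"], lvc_lemmas))            # False
```

Because the lookup ignores context entirely, recall is bounded by the coverage of the lemma list, which matches the behavior reported above.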
Since the corpus was created by collecting sentences that contain verb-object pairs with specific verbs, this dataset contains many negative and ambiguous examples besides the annotated LVCs; hence the distribution of LVCs in the Tu&Roth dataset is not comparable to that in Wiki50 or SZPFX. In this dataset, only one positive or negative example was annotated in each sentence, and only the verb-object pairs formed with the six selected verbs were examined as potential LVCs. However, the corpus probably contains other LVCs that were not annotated. For example, in the sentence it have been held that a gift to a charity of shares in a close company gave rise to a charge to capital transfer tax where the company had an interest in possession in a trust, the phrase give rise was listed as a negative example in the Tu&Roth dataset, but have an interest, which is another LVC, was marked neither positive nor negative. This is problematic if we wish to evaluate our candidate extractor on this dataset, since it would identify this phrase (even when restricted to verb-object pairs containing one of the six verbs mentioned above), thus yielding false positives already in the candidate extraction phase.", "cite_spans": [], "ref_spans": [ { "start": 196, "end": 203, "text": "Table 4", "ref_id": null }, { "start": 627, "end": 634, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Moreover, the results obtained with our machine learning approach outperformed those reported in Tu and Roth (2011). 
This may be attributed to the rich feature set used in our system, which included new features such as semantic and morphological ones and demonstrated a consistent performance on both the positive and negative classes.", "cite_spans": [ { "start": 93, "end": 111, "text": "Tu and Roth (2011)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "To examine the effectiveness of each individual feature of the machine learning-based candidate classification, we carried out an ablation analysis. Table 6 shows the usefulness of each individual feature type on the SZPFX corpus. Table 6: The usefulness of individual features in terms of precision, recall and F-score using the SZPFX corpus.", "cite_spans": [], "ref_spans": [ { "start": 149, "end": 156, "text": "Table 6", "ref_id": null }, { "start": 231, "end": 238, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "For each feature type, we trained a J48 classifier with all of the features except that one. We then compared the performance with that obtained using all the features. As our ablation analysis shows, each type of feature contributed to the overall performance. The most important feature is the list of the most frequent light verbs. The most common verbs in a language are used very frequently in different contexts, with several argument structures, and this may lead to the bleaching (or at least generalization) of their semantic content (Altmann, 2005) . From this perspective, it is linguistically plausible that the most frequent verbs in a language largely coincide with the most typical light verbs since light verbs lose their original meaning to some extent (see e.g. 
Sanrom\u00e1n Vilas (2009)).", "cite_spans": [ { "start": 532, "end": 547, "text": "(Altmann, 2005)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Besides the ablation analysis, we also investigated the decision tree model yielded by our experiments. In line with the results of our ablation analysis, we found that the lexical features were the most powerful; the semantic, syntactic and orthographic features were also useful, while the statistical and morphological features were less effective but were still exploited by the model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Comparing the results on the three corpora, it is salient that the F-scores obtained by applying the methods to the Tu&Roth dataset were considerably better than those obtained on the other two corpora. This can be explained if we recall that this dataset applies a restricted definition of LVCs, works only with verb-object pairs and, furthermore, contains constructions with only six light verbs. In contrast, Wiki50 and SZPFX contain all LVCs, including verb + preposition + noun combinations, and they are not restricted to six verbs. All these characteristics demonstrate that identifying LVCs in the latter two corpora is a more realistic and challenging task than identifying them in the artificial Tu&Roth dataset. For example, very frequent and important LVCs like make a decision, one of the most frequent LVCs in the two full-coverage LVC-annotated corpora, are ignored if we focus only on identifying true LVCs. This could be detrimental when a higher-level NLP application exploits the LVC detector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "We also carried out a manual error analysis on the data. 
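The ablation procedure described above can be sketched as follows; `toy_scorer` is an invented stand-in for training the J48 classifier and measuring the F-score, so the weights and the resulting ranking are purely illustrative.

```python
# Sketch of the ablation analysis: for every feature group, a model is
# trained on all features except that group and its score is compared
# with the score of the full feature set. The scorer below is a toy
# stand-in for training J48 and measuring the F-score.

FEATURE_GROUPS = ["lexical", "semantic", "syntactic",
                  "orthographic", "statistical", "morphological"]

def ablation(score_fn):
    """Return, per feature group, the score drop caused by removing it."""
    full = score_fn(FEATURE_GROUPS)
    return {g: full - score_fn([f for f in FEATURE_GROUPS if f != g])
            for g in FEATURE_GROUPS}

def toy_scorer(features):
    # Invented weights; they merely make lexical features matter most.
    weights = {"lexical": 10, "semantic": 4, "syntactic": 3,
               "orthographic": 2, "statistical": 1, "morphological": 1}
    return sum(weights[f] for f in features)

drops = ablation(toy_scorer)
print(max(drops, key=drops.get))  # the group whose removal hurts most
```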
We found that in the candidate extraction step, it is primarily POS-tagging or parsing errors that result in the omission of certain LVC candidates. In other cases, the dependency relation between the nominal and the verbal component is missing (recall the example of objects with quantifiers), or it is an atypical one (e.g. dep) not included in our list. The lower recall in the case of SZPFX can be attributed to the fact that this corpus contains more nominal occurrences of LVCs (e.g. decision-making or record holder) than Wiki50. These were annotated in the corpora, but our morphology-based and extended syntax-based methods were not specifically trained for them, since adding POS patterns like NOUN-NOUN or the corresponding syntactic relations would have resulted in the unnecessary inclusion of many nominal compounds.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "As for the errors made during classification, it seems that it was hard for the classifier to label longer constructions properly. This was especially true when the LVC occurred in a non-canonical form, as in a relative clause (counterargument that can be made). Constructions with atypical light verbs (e.g. cast a glance) were also somewhat more difficult to find. Nevertheless, some false positives were due to annotation errors in the corpora. A further source of errors was that some literal and productive structures like to give a book (to someone) -which contains one of the most typical light verbs and a noun homonymous with the verb book "to reserve" -are very difficult to distinguish from LVCs and were therefore marked as LVCs. Moreover, the classification of idioms with a syntactic or morphological structure similar to typical LVCs -to have a crush on someone "to be fond of someone", which consists of a typical light verb and a deverbal noun -was also not straightforward. 
In other cases, verb-particle combinations followed by a noun, such as make up his mind or give in his notice, were labeled as LVCs. Since Wiki50 contains annotated examples of both types of MWEs, the classification of verb + particle/preposition + noun combinations as verb-particle combinations, LVCs or simple verb + prepositional phrase combinations could be a possible direction for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "In this paper, we introduced a system that enables the full-coverage identification of English LVCs in running texts. Our method detected a broader range of LVCs than previous studies, which focused only on certain subtypes of LVCs. We solved the problem in a two-step approach. In the first step, we extracted potential LVCs from running text; in the second step, we applied a machine learning-based approach that made use of a rich feature set to classify the extracted syntactic phrases. Moreover, we investigated the performance of different candidate extraction methods in the first step on the two available full-coverage LVC-annotated corpora, and we found that owing to the overly strict candidate extraction methods applied, the majority of the LVCs were overlooked. Our results show that the full-coverage identification of LVCs is challenging, but our approach can achieve promising results. The tool can be used in preprocessing steps for e.g. information extraction applications or machine translation systems, where it is necessary to locate lexical items that require special treatment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "In the future, we would like to improve our system by conducting a detailed analysis of the effect of the features included. Later, we also plan to investigate how our LVC identification system helps higher-level NLP applications. 
Moreover, we would like to adapt our system to identify other types of MWE and experiment with LVC detection in other languages as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "Light verbs may also be defined as semantically empty support verbs, which share their arguments with a noun (see the NomBank project(Meyers et al., 2004)), that is, the term support verb is a hypernym of light verb.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In theoretical linguistics, two types of LVCs are distinguished(Kearns, 2002). In true LVCs such as to have a laugh we can find a noun that is a conversive of a verb (i.e. it can be used as a verb without any morphological change), while in vague action verbs such as to make an agreement there is a noun derived from a verb (i.e. there is morphological change).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported in part by the European Union and the European Social Fund through the project FuturICT.hu (grant no.: T\u00c1MOP-4.2.2.C-11/1/KONV-2012-0013).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Diversification processes", "authors": [ { "first": "Gabriel", "middle": [], "last": "Altmann", "suffix": "" } ], "year": 2005, "venue": "Handbook of Quantitative Linguistics", "volume": "", "issue": "", "pages": "646--659", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gabriel Altmann. 2005. Diversification processes. In Handbook of Quantitative Linguistics, pages 646- 659, Berlin. 
de Gruyter.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A measure of syntactic flexibility for automatically identifying multiword expressions in corpora", "authors": [ { "first": "Colin", "middle": [], "last": "Bannard", "suffix": "" } ], "year": 2007, "venue": "Proceedings of MWE 2007", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Bannard. 2007. A measure of syntactic flex- ibility for automatically identifying multiword ex- pressions in corpora. In Proceedings of MWE 2007, pages 1-8, Morristown, NJ, USA. ACL.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Top accuracy and fast dependency parsing is not a contradiction", "authors": [ { "first": "Bernd", "middle": [], "last": "Bohnet", "suffix": "" } ], "year": 2010, "venue": "Proceedings of Coling 2010", "volume": "", "issue": "", "pages": "89--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bernd Bohnet. 2010. Top accuracy and fast depen- dency parsing is not a contradiction. 
In Proceedings of Coling 2010, pages 89-97.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Towards best practice for multiword expressions in computational lexicons", "authors": [ { "first": "Nicoletta", "middle": [], "last": "Calzolari", "suffix": "" }, { "first": "Charles", "middle": [], "last": "Fillmore", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "Nancy", "middle": [], "last": "Ide", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Lenci", "suffix": "" }, { "first": "Catherine", "middle": [], "last": "Macleod", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Zampolli", "suffix": "" } ], "year": 2002, "venue": "Proceedings of LREC 2002", "volume": "", "issue": "", "pages": "1934--1940", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nicoletta Calzolari, Charles Fillmore, Ralph Grishman, Nancy Ide, Alessandro Lenci, Catherine MacLeod, and Antonio Zampolli. 2002. Towards best prac- tice for multiword expressions in computational lex- icons. In Proceedings of LREC 2002, pages 1934- 1940, Las Palmas.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Pulling their weight: exploiting syntactic forms for the automatic identification of idiomatic expressions in context", "authors": [ { "first": "Paul", "middle": [], "last": "Cook", "suffix": "" }, { "first": "Afsaneh", "middle": [], "last": "Fazly", "suffix": "" }, { "first": "Suzanne", "middle": [], "last": "Stevenson", "suffix": "" } ], "year": 2007, "venue": "Proceedings of MWE 2007", "volume": "", "issue": "", "pages": "41--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Cook, Afsaneh Fazly, and Suzanne Stevenson. 2007. Pulling their weight: exploiting syntactic forms for the automatic identification of idiomatic expressions in context. In Proceedings of MWE 2007, pages 41-48, Morristown, NJ, USA. 
ACL.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Supportvector networks", "authors": [ { "first": "Corinna", "middle": [], "last": "Cortes", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Vapnik", "suffix": "" } ], "year": 1995, "venue": "Machine Learning", "volume": "20", "issue": "", "pages": "273--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "Corinna Cortes and Vladimir Vapnik. 1995. Support- vector networks. Machine Learning, 20(3):273- 297.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Verb Noun Construction MWE Token Classification", "authors": [ { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Pravin", "middle": [], "last": "Bhutada", "suffix": "" } ], "year": 2009, "venue": "Proceedings of MWE 2009", "volume": "", "issue": "", "pages": "17--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mona Diab and Pravin Bhutada. 2009. Verb Noun Construction MWE Token Classification. In Pro- ceedings of MWE 2009, pages 17-22, Singapore, August. ACL.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Distinguishing Subtypes of Multiword Expressions Using Linguistically-Motivated Statistical Measures", "authors": [ { "first": "Afsaneh", "middle": [], "last": "Fazly", "suffix": "" }, { "first": "Suzanne", "middle": [], "last": "Stevenson", "suffix": "" } ], "year": 2007, "venue": "Proceedings of MWE 2007", "volume": "", "issue": "", "pages": "9--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Afsaneh Fazly and Suzanne Stevenson. 2007. Distin- guishing Subtypes of Multiword Expressions Using Linguistically-Motivated Statistical Measures. In Proceedings of MWE 2007, pages 9-16, Prague, Czech Republic, June. 
ACL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "WordNet An Electronic Lexical Database", "authors": [], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christiane Fellbaum, editor. 1998. WordNet An Elec- tronic Lexical Database. The MIT Press, Cam- bridge, MA ; London, May.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Automatic Extraction of NV Expressions in Basque: Basic Issues on Cooccurrence Techniques", "authors": [ { "first": "Antton", "middle": [], "last": "Gurrutxaga", "suffix": "" }, { "first": "I\u00f1aki", "middle": [], "last": "Alegria", "suffix": "" } ], "year": 2011, "venue": "Proceedings of MWE 2011", "volume": "", "issue": "", "pages": "2--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antton Gurrutxaga and I\u00f1aki Alegria. 2011. Auto- matic Extraction of NV Expressions in Basque: Ba- sic Issues on Cooccurrence Techniques. In Proceed- ings of MWE 2011, pages 2-7, Portland, Oregon, USA, June. ACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The WEKA data mining software: an update. SIGKDD Explorations", "authors": [ { "first": "Mark", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Eibe", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Holmes", "suffix": "" }, { "first": "Bernhard", "middle": [], "last": "Pfahringer", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Reutemann", "suffix": "" }, { "first": "Ian", "middle": [ "H" ], "last": "Witten", "suffix": "" } ], "year": 2009, "venue": "", "volume": "11", "issue": "", "pages": "10--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA data mining software: an update. 
SIGKDD Explorations, 11(1):10-18.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Accurate unlexicalized parsing", "authors": [ { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2003, "venue": "Annual Meeting of the ACL", "volume": "41", "issue": "", "pages": "423--430", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Klein and Christopher D. Manning. 2003. Accu- rate unlexicalized parsing. In Annual Meeting of the ACL, volume 41, pages 423-430.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Description of Evaluation Resource -German PP-verb data", "authors": [ { "first": "Brigitte", "middle": [], "last": "Krenn", "suffix": "" } ], "year": 2008, "venue": "Proceedings of MWE 2008", "volume": "", "issue": "", "pages": "7--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brigitte Krenn. 2008. Description of Evaluation Re- source -German PP-verb data. In Proceedings of MWE 2008, pages 7-10, Marrakech, Morocco, June.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The NomBank Project: An Interim Report", "authors": [ { "first": "Adam", "middle": [], "last": "Meyers", "suffix": "" }, { "first": "Ruth", "middle": [], "last": "Reeves", "suffix": "" }, { "first": "Catherine", "middle": [], "last": "Macleod", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Szekely", "suffix": "" }, { "first": "Veronika", "middle": [], "last": "Zielinska", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Young", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2004, "venue": "HLT-NAACL 2004 Workshop: Frontiers in Corpus Annotation", "volume": "", "issue": "", "pages": "24--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Meyers, Ruth Reeves, Catherine Macleod, Rachel Szekely, Veronika Zielinska, Brian Young, and Ralph Grishman. 2004. 
The NomBank Project: An Interim Report. In HLT-NAACL 2004 Work- shop: Frontiers in Corpus Annotation, pages 24-31, Boston, Massachusetts, USA. ACL.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Domain-Dependent Identification of Multiword Expressions", "authors": [ { "first": "Istv\u00e1n", "middle": [], "last": "Nagy", "suffix": "" }, { "first": "T", "middle": [], "last": "", "suffix": "" }, { "first": "Veronika", "middle": [], "last": "Vincze", "suffix": "" }, { "first": "G\u00e1bor", "middle": [], "last": "Berend", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the RANLP 2011", "volume": "", "issue": "", "pages": "622--627", "other_ids": {}, "num": null, "urls": [], "raw_text": "Istv\u00e1n Nagy T., Veronika Vincze, and G\u00e1bor Berend. 2011. Domain-Dependent Identification of Multi- word Expressions. In Proceedings of the RANLP 2011, pages 622-627, Hissar, Bulgaria, September. RANLP 2011 Organising Committee.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "C4.5: Programs for Machine Learning", "authors": [ { "first": "Ross", "middle": [], "last": "Quinlan", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ross Quinlan. 1993. C4.5: Programs for Machine Learning. 
Morgan Kaufmann Publishers, San Ma- teo, CA.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Multiword Expressions: A Pain in the Neck for NLP", "authors": [ { "first": "A", "middle": [], "last": "Ivan", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Sag", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Bond", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Copestake", "suffix": "" }, { "first": "", "middle": [], "last": "Flickinger", "suffix": "" } ], "year": 2002, "venue": "Proceedings of CICLing", "volume": "", "issue": "", "pages": "1--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan A. Sag, Timothy Baldwin, Francis Bond, Ann Copestake, and Dan Flickinger. 2002. Multiword Expressions: A Pain in the Neck for NLP. In Proceedings of CICLing 2002, pages 1-15, Mexico City, Mexico.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Cross-lingual variation of light verb constructions: Using parallel corpora and automatic alignment for linguistic research", "authors": [ { "first": "Tanja", "middle": [], "last": "Samard\u017ei\u0107", "suffix": "" }, { "first": "Paola", "middle": [], "last": "Merlo", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Workshop on NLP and Linguistics: Finding the Common Ground", "volume": "", "issue": "", "pages": "52--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tanja Samard\u017ei\u0107 and Paola Merlo. 2010. Cross-lingual variation of light verb constructions: Using parallel corpora and automatic alignment for linguistic re- search. In Proceedings of the 2010 Workshop on NLP and Linguistics: Finding the Common Ground, pages 52-60, Uppsala, Sweden, July. ACL.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Towards a semantically oriented selection of the values of Oper 1 . 
The case of golpe 'blow' in Spanish", "authors": [ { "first": "", "middle": [], "last": "Bego\u00f1a Sanrom\u00e1n", "suffix": "" }, { "first": "", "middle": [], "last": "Vilas", "suffix": "" } ], "year": 2009, "venue": "Proceedings of MTT 2009", "volume": "", "issue": "", "pages": "327--337", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bego\u00f1a Sanrom\u00e1n Vilas. 2009. Towards a semanti- cally oriented selection of the values of Oper 1 . The case of golpe 'blow' in Spanish. In Proceedings of MTT 2009, pages 327-337, Montreal, Canada. Uni- versit\u00e9 de Montr\u00e9al.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Statistical Measures of the Semi-Productivity of Light Verb Constructions", "authors": [ { "first": "Suzanne", "middle": [], "last": "Stevenson", "suffix": "" }, { "first": "Afsaneh", "middle": [], "last": "Fazly", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "North", "suffix": "" } ], "year": 2004, "venue": "MWE 2004", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suzanne Stevenson, Afsaneh Fazly, and Ryan North. 2004. Statistical Measures of the Semi-Productivity of Light Verb Constructions. In MWE 2004, pages 1-8, Barcelona, Spain, July. ACL.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Extending corpus-based identification of light verb constructions using a supervised learning framework", "authors": [ { "first": "Min-Yen", "middle": [], "last": "Yee Fan Tan", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Kan", "suffix": "" }, { "first": "", "middle": [], "last": "Cui", "suffix": "" } ], "year": 2006, "venue": "Proceedings of MWE 2006", "volume": "", "issue": "", "pages": "49--56", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yee Fan Tan, Min-Yen Kan, and Hang Cui. 2006. Extending corpus-based identification of light verb constructions using a supervised learning frame- work. 
In Proceedings of MWE 2006, pages 49-56, Trento, Italy, April. ACL.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Learning English Light Verb Constructions: Contextual or Statistical", "authors": [ { "first": "Yuancheng", "middle": [], "last": "Tu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2011, "venue": "Proceedings of MWE 2011", "volume": "", "issue": "", "pages": "31--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuancheng Tu and Dan Roth. 2011. Learning English Light Verb Constructions: Contextual or Statistical. In Proceedings of MWE 2011, pages 31-39, Port- land, Oregon, USA, June. ACL.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Semantics-based multiword expression extraction", "authors": [ { "first": "Tim", "middle": [], "last": "Van De Cruys", "suffix": "" }, { "first": "Bego\u00f1a", "middle": [], "last": "Villada Moir\u00f3n", "suffix": "" } ], "year": 2007, "venue": "Proceedings of MWE 2007", "volume": "", "issue": "", "pages": "25--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tim Van de Cruys and Bego\u00f1a Villada Moir\u00f3n. 2007. Semantics-based multiword expression extraction. In Proceedings of MWE 2007, pages 25-32, Mor- ristown, NJ, USA. ACL.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Multiword Expressions and Named Entities in the Wiki50 Corpus", "authors": [ { "first": "Veronika", "middle": [], "last": "Vincze", "suffix": "" }, { "first": "Istv\u00e1n", "middle": [], "last": "Nagy", "suffix": "" }, { "first": "T", "middle": [], "last": "", "suffix": "" }, { "first": "G\u00e1bor", "middle": [], "last": "Berend", "suffix": "" } ], "year": 2011, "venue": "Proceedings of RANLP 2011", "volume": "", "issue": "", "pages": "289--295", "other_ids": {}, "num": null, "urls": [], "raw_text": "Veronika Vincze, Istv\u00e1n Nagy T., and G\u00e1bor Berend. 2011. Multiword Expressions and Named Entities in the Wiki50 Corpus. 
In Proceedings of RANLP 2011, pages 289-295, Hissar, Bulgaria, September. RANLP 2011 Organising Committee.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Light Verb Constructions in the SzegedParalellFX English-Hungarian Parallel Corpus", "authors": [ { "first": "Veronika", "middle": [], "last": "Vincze", "suffix": "" } ], "year": 2012, "venue": "Proceedings of LREC 2012", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Veronika Vincze. 2012. Light Verb Constructions in the SzegedParalellFX English-Hungarian Paral- lel Corpus. In Proceedings of LREC 2012, Istanbul, Turkey.", "links": null } }, "ref_entries": { "TABREF0": { "type_str": "table", "num": null, "content": "
Corpus Sent. Tokens LVCs LVC lemmas
Wiki50 4,350 114,570 368 287
SZPFX 14,262 298,948 1,371 706
Tu&Roth 2,162 65,060 1,039 430
", "html": null, "text": "." }, "TABREF1": { "type_str": "table", "num": null, "content": "", "html": null, "text": "Statistical data on LVCs in the Wiki50 and SZPFX corpora and the Tu&Roth dataset." }, "TABREF3": { "type_str": "table", "num": null, "content": "
", "html": null, "text": "" }, "TABREF5": { "type_str": "table", "num": null, "content": "
", "html": null, "text": "The recall of candidate extraction approaches. dobj: verb-object pairs. POS:" }, "TABREF6": { "type_str": "table", "num": null, "content": "
Method Wiki50 SZPFX
 J48 SVM J48 SVM
 Prec. Rec. F-score Prec. Rec. F-score Prec. Rec. F-score Prec. Rec. F-score
DM 56.11 36.26 44.05 56.11 36.26 44.05 72.65 27.83 40.24 72.65 27.83 40.24
POS 60.65 46.2 52.45 54.1 48.64 51.23 66.12 43.02 52.12 54.88 42.42 47.85
Syntax 61.29 47.55 53.55 50.99 51.63 51.31 63.25 56.17 59.5 54.38 54.03 54.2
POS\u222aSyntax 58.99 51.09 54.76 49.72 51.36 50.52 63.29 56.91 59.93 55.84 55.14 55.49
", "html": null, "text": "as well, to compare our methods withTu &" }, "TABREF7": { "type_str": "table", "num": null, "content": "
Method Accuracy F1+ F1-
DM 61.25 56.96 64.76
Tu&Roth Original 68.52 75.36 56.41
J48 72.51 74.73 70.5
", "html": null, "text": "(68.52%)." } } } }