{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:15:00.119146Z" }, "title": "Contextualized Embeddings Encode Monolingual and Cross-lingual Knowledge of Idiomaticity", "authors": [ { "first": "Samin", "middle": [], "last": "Fakharian", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of New Brunswick Fredericton", "location": { "postCode": "E3B 5A3", "region": "NB", "country": "Canada" } }, "email": "samin.fakharian@unb.ca" }, { "first": "Paul", "middle": [], "last": "Cook", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of New Brunswick Fredericton", "location": { "postCode": "E3B 5A3", "region": "NB", "country": "Canada" } }, "email": "paul.cook@unb.ca" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Potentially idiomatic expressions (PIEs) are ambiguous between non-compositional idiomatic interpretations and transparent literal interpretations. For example, hit the road can have an idiomatic meaning corresponding to 'start a journey' or have a literal interpretation. In this paper we propose a supervised model based on contextualized embeddings for predicting whether usages of PIEs are idiomatic or literal. We consider monolingual experiments for English and Russian, and show that the proposed model outperforms previous approaches, including in the case that the model is tested on instances of PIE types that were not observed during training. We then consider cross-lingual experiments in which the model is trained on PIE instances in one language, English or Russian, and tested on the other language. We find that the model outperforms baselines in this setting. These findings suggest that contextualized embeddings are able to learn representations that encode knowledge of idiomaticity that is not restricted to specific expressions, nor to a specific language.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Potentially idiomatic expressions (PIEs) are ambiguous between non-compositional idiomatic interpretations and transparent literal interpretations. For example, hit the road can have an idiomatic meaning corresponding to 'start a journey' or have a literal interpretation. In this paper we propose a supervised model based on contextualized embeddings for predicting whether usages of PIEs are idiomatic or literal. We consider monolingual experiments for English and Russian, and show that the proposed model outperforms previous approaches, including in the case that the model is tested on instances of PIE types that were not observed during training. We then consider cross-lingual experiments in which the model is trained on PIE instances in one language, English or Russian, and tested on the other language. We find that the model outperforms baselines in this setting. These findings suggest that contextualized embeddings are able to learn representations that encode knowledge of idiomaticity that is not restricted to specific expressions, nor to a specific language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Multiword expressions (MWEs) are lexicalized combinations of multiple words, which display some form of idiomaticity (Baldwin and Kim, 2010) . In this paper we focus on potentiallyidiomatic expressions (PIEs), i.e., expressions which are ambiguous between a semanticallyopaque idiomatic interpretation, and a compositional literal meaning. 
In the following example, the English PIE hit the road has an idiomatic meaning corresponding roughly to 'start a journey': 1. The marchers had hit the road before 0500 hours and by midday they were limping back having achieved success on day one.", "cite_spans": [ { "start": 117, "end": 140, "text": "(Baldwin and Kim, 2010)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "On the other hand, hit the road can also be used literally, as in the example below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. Two climbers dislodged another huge block which hit the road within 18 inches of one of the estate's senior guides. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "PIEs occur across languages, with one particularly common class of PIE cross-lingually being verb-noun combinations (VNCs, Fazly et al., 2009) -i.e., PIEs consisting of a verb with a noun in its direct object position -such as hit the road in the example above. Although VNCs are common, PIEs also occur in other syntactic constructions, with English examples including combinations of a verb and prepositional phrase -e.g., skating on thin ice (which can be used idiomatically to mean roughly 'at risk') -and prepositional phrases -e.g., off the hook (with a potential idiomatic meaning of roughly 'out of danger'). Distinguishing between literal and idiomatic usages of PIEs could be particularly important for downstream natural language processing applications such as machine translation (Isabelle et al., 2017) .", "cite_spans": [ { "start": 116, "end": 142, "text": "(VNCs, Fazly et al., 2009)", "ref_id": null }, { "start": 792, "end": 815, "text": "(Isabelle et al., 2017)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous work has considered both unsupervised and supervised approaches to predicting the token-level idiomaticity of PIEs. However, annotated data to train supervised approaches is not available for all PIEs in all languages. This makes unsupervised approaches (e.g., Fazly et al., 2009; Haagsma et al., 2018; Liu and Hwa, 2018; Kurfal\u0131 and \u00d6stling, 2020) , which do not have this resource requirement, appealing.
On the other hand, supervised approaches (e.g., Salton et al., 2016; King and Cook, 2018) tend to outperform unsupervised approaches, but are restricted to languages and PIEs for which annotated training data is available.", "cite_spans": [ { "start": 269, "end": 288, "text": "Fazly et al., 2009;", "ref_id": "BIBREF6" }, { "start": 289, "end": 310, "text": "Haagsma et al., 2018;", "ref_id": "BIBREF8" }, { "start": 311, "end": 329, "text": "Liu and Hwa, 2018;", "ref_id": "BIBREF17" }, { "start": 330, "end": 356, "text": "Kurfal\u0131 and \u00d6stling, 2020)", "ref_id": "BIBREF16" }, { "start": 463, "end": 483, "text": "Salton et al., 2016;", "ref_id": "BIBREF21" }, { "start": 484, "end": 504, "text": "King and Cook, 2018)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we consider supervised approaches based on contextualized embeddings (Devlin et al., 2019; Liu et al., 2019; Kuratov and Arkhipov, 2019) to predicting usages of PIEs as idiomatic or literal; however, we measure the ability of these approaches to generalize to expressions that were not observed during training, and also to generalize across languages. We begin by considering monolingual experiments for English and Russian in which we train and test on instances of the same PIEs. For English, we focus on VNCs (Cook et al., 2008) . For Russian, we consider a wider range of types of PIEs (Aharodnik et al., 2018) . We then consider a second monolingual setting in which we evaluate on PIEs, again either English or Russian, that were not observed during training. Finally, we consider cross-lingual detection of idiomaticity. Here we train on instances of PIEs in one language, English or Russian, and evaluate on instances of PIEs in the other language.", "cite_spans": [ { "start": 83, "end": 104, "text": "(Devlin et al., 2019;", "ref_id": "BIBREF4" }, { "start": 105, "end": 122, "text": "Liu et al., 2019;", "ref_id": "BIBREF18" }, { "start": 123, "end": 150, "text": "Kuratov and Arkhipov, 2019)", "ref_id": "BIBREF15" }, { "start": 527, "end": 546, "text": "(Cook et al., 2008)", "ref_id": "BIBREF3" }, { "start": 605, "end": 629, "text": "(Aharodnik et al., 2018)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our findings evaluating on expressions that were observed during training are similar to those of Kurfal\u0131 and \u00d6stling (2020); we achieve strong improvements over baselines, and on English outperform previous approaches based on conventional word embeddings (King and Cook, 2018) . In monolingual experiments evaluating on PIEs that were not observed during training, we again improve over baselines, and in the case of English, also over a strong linguistically-informed unsupervised baseline. In cross-lingual experiments, in which the model is evaluated on instances of PIEs in a language that was not observed during training, we again improve over baselines, and remarkably observe performance roughly on par with that of monolingual experiments evaluating on expressions not observed during training.
These findings suggest that contextualized embeddings are able to learn representations that encode knowledge of idiomaticity that is not restricted to specific expressions, nor to a specific language.", "cite_spans": [ { "start": 258, "end": 279, "text": "(King and Cook, 2018)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous work has considered unsupervised and supervised approaches to predicting the token-level idiomaticity of PIEs. Although unsupervised methods have been proposed to disambiguate a wide range of kinds of potentially-idiomatic expressions (Haagsma et al., 2018; Liu and Hwa, 2018; Kurfal\u0131 and \u00d6stling, 2020) , and are not limited to languages and types of PIEs for which training data is available, these approaches tend not to perform as well as supervised approaches.", "cite_spans": [ { "start": 244, "end": 266, "text": "(Haagsma et al., 2018;", "ref_id": "BIBREF8" }, { "start": 267, "end": 285, "text": "Liu and Hwa, 2018;", "ref_id": "BIBREF17" }, { "start": 286, "end": 312, "text": "Kurfal\u0131 and \u00d6stling, 2020)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Focusing on specific languages and types of expressions can improve unsupervised approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "For example, focusing on VNCs, the idiomatic interpretations of VNCs are typically lexico-syntactically fixed. Returning to the hit the road example from Section 1, the idiomatic interpretation is typically not accessible if the determiner is indefinite (e.g., hit a road), the noun is plural (e.g., hit the roads), or the voice is passive (e.g., the road was hit); in such cases typically only the literal interpretation is available. Fazly et al. (2009) propose an unsupervised statistical method based on the lexico-syntactic fixedness of VNCs to determine the canonical forms -with respect to the determiner, number of the noun, and voice of the verb -of VNCs. They observe that idiomatic usages of VNCs tend to occur in canonical forms, and that literal usages tend to occur in non-canonical forms. A strong, linguistically-informed unsupervised baseline for distinguishing literal from idiomatic VNC usages is therefore to label canonical form usages as idiomatic, and non-canonical form usages as literal. Salton et al. (2016) propose a supervised approach to predicting the token-level idiomaticity of PIEs, focusing on English VNCs, based on training an SVM on skip-thought (Kiros et al., 2015) representations of sentences containing PIEs. King and Cook (2018) achieve better results using a simpler sentence representation based on the average of word embeddings. Moreover, King and Cook show that adding a single binary feature to the sentence representation indicating whether the VNC occurs in a canonical form -based on the method of Fazly et al. (2009) -gives substantial improvements. Hashempour and Villavicencio (2020) propose a supervised approach in which PIE instances are treated as single units by fusing their lexicalized component words, and learning representations of these units using word and contextualized (Melamud et al., 2016; Devlin et al., 2019) embeddings. Hashempour and Villavicencio also focus on VNCs.
Although they show improvements by treating VNC instances as fused units, they do not outperform King and Cook; they do, however, train their models on smaller corpora. Shwartz and Dagan (2019) use representations of spans of tokens based on contextualized embeddings for predicting a range of MWE properties. Most closely related to our work, they consider light-verb construction and verb-particle construction classification, for both of which there is an ambiguity between MWE usages and similar-on-the-surface literal combinations. Shwartz and Dagan do not, however, consider English VNCs or Russian idioms as we do. Kurfal\u0131 and \u00d6stling (2020) propose a supervised approach to classifying instances of potentially idiomatic expressions, as idiomatic or literal, based on contextualized embeddings. They represent MWE instances as the average of the contextual embeddings for the tokenized pieces of their lexicalized component words, which are lemmatized in a preprocessing step, and use a single-layer perceptron for classification. Their findings indicate that their approach improves over previous approaches on English and German PIEs. In this paper, similarly to Kurfal\u0131 and \u00d6stling, we consider an approach based on contextualized embeddings, but we consider experimental setups in which classifiers are evaluated on expressions, and also languages, that are unobserved during training.", "cite_spans": [ { "start": 435, "end": 454, "text": "Fazly et al. (2009)", "ref_id": "BIBREF6" }, { "start": 1012, "end": 1032, "text": "Salton et al. (2016)", "ref_id": "BIBREF21" }, { "start": 1183, "end": 1203, "text": "(Kiros et al., 2015)", "ref_id": "BIBREF14" }, { "start": 1250, "end": 1270, "text": "King and Cook (2018)", "ref_id": "BIBREF12" }, { "start": 1545, "end": 1564, "text": "Fazly et al. (2009)", "ref_id": "BIBREF6" }, { "start": 1834, "end": 1856, "text": "(Melamud et al., 2016;", "ref_id": "BIBREF19" }, { "start": 1857, "end": 1877, "text": "Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 2561, "end": 2587, "text": "Kurfal\u0131 and \u00d6stling (2020)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Previous supervised approaches to identifying idiomatic instances of PIEs have represented PIE instances with sentence embeddings (Salton et al., 2016; King and Cook, 2018) . We consider a similar approach here using contextualized embeddings from BERT (Devlin et al., 2019) , RoBERTa (Liu et al., 2019) , RuBERT (Kuratov and Arkhipov, 2019) , and mBERT (Devlin et al., 2019) . Specifically, we represent a PIE instance using the CLS (classification) token for the context in which it occurs. 2 For representing English PIEs we use the sentence in which the target expression occurs as the context. For representing Russian PIEs, the dataset we use (discussed in Section 4.1) does not include sentence segmentation, and so we instead use a context of up to 300 characters to the left and right of the target expression.
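To make this concrete, the following minimal sketch (our illustration, not the authors' released code; the model name and example sentence are placeholders) extracts such a CLS representation with the Hugging Face transformers library, and appends a hypothetical canonical-form (CF) flag of the kind described next:

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModel.from_pretrained('bert-base-uncased')

# For English PIEs the context is the sentence containing the target expression.
context = 'The marchers had hit the road before 0500 hours.'
inputs = tokenizer(context, return_tensors='pt', truncation=True)
with torch.no_grad():
    output = model(**inputs)

# The CLS token is the first position of the final hidden layer: shape (1, 768).
cls_embedding = output.last_hidden_state[:, 0, :]

# Hypothetical CF feature (see below): 1.0 if the VNC occurs in a canonical form.
cf = torch.tensor([[1.0]])
representation = torch.cat([cls_embedding, cf], dim=1)  # shape (1, 769)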
Because we focus on VNCs for English experiments, following King and Cook (2018) , for our monolingual experiments on English VNCs we also consider whether incorporating information about the lexico-syntactic fixedness of VNCs into our approach gives improvements. Specifically, we concatenate a single binary feature indicating whether a VNC usage is in a canonical form, referred to as CF, with the representation of the CLS token. 2 In preliminary experiments we also considered representations of English VNC instances formed by averaging and concatenating contextualized representations of the verb and noun components of a target VNC (where the verb and noun representations are themselves averages of the representations of the word pieces they are segmented into). We found these approaches to perform roughly on par with representing VNC instances using the CLS token, and so only consider this approach here. 3 We did not attempt to tune this context window size, although there is scope to do so in future work.", "cite_spans": [ { "start": 130, "end": 151, "text": "(Salton et al., 2016;", "ref_id": "BIBREF21" }, { "start": 152, "end": 172, "text": "King and Cook, 2018)", "ref_id": "BIBREF12" }, { "start": 253, "end": 274, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 285, "end": 303, "text": "(Liu et al., 2019)", "ref_id": "BIBREF18" }, { "start": 313, "end": 341, "text": "(Kuratov and Arkhipov, 2019)", "ref_id": "BIBREF15" }, { "start": 354, "end": 375, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 819, "end": 820, "text": "3", "ref_id": null }, { "start": 881, "end": 901, "text": "King and Cook (2018)", "ref_id": "BIBREF12" }, { "start": 1001, "end": 1002, "text": "2", "ref_id": null }, { "start": 1486, "end": 1487, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Predicting PIE Idiomaticity with Contextualized Embeddings", "sec_num": "3" }, { "text": "We fine-tune pre-trained BERT, RoBERTa, RuBERT, and mBERT models for binary classification of PIE token instances as idiomatic or literal. We use two fully-connected layers on top of the contextualized embedding model. The first layer has the same dimensionality as the representation of the VNC (i.e., 768 dimensions, the hidden layer size of each of the contextualized embedding models considered, and an additional dimension when the CF feature is used) and uses the ReLU activation function. The second layer has 512 dimensions and uses the softmax activation function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Predicting PIE Idiomaticity with Contextualized Embeddings", "sec_num": "3" }, { "text": "In this section we describe our datasets (Section 4.1), experimental setups and evaluation metric (Section 4.2), and then the implementation of our models and the parameter settings used (Section 4.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Materials and Methods", "sec_num": "4" }, { "text": "Following Salton et al. (2016) , King and Cook (2018) , and Hashempour and Villavicencio (2020), for English, we use the VNC-Tokens dataset (Cook et al., 2008) , which consists of English VNC usages extracted from the British National Corpus (Burnard, 2000) , annotated as idiomatic, literal, or unknown. 4 The dataset is divided into development (EN-DEV) and test (EN-TEST) portions, each containing instances of 14 VNC types. For Russian, we use the idiom-annotated corpus of Aharodnik et al. (2018) , which consists of Russian idioms with a range of syntactic constructions including preposition+noun, preposition+adj+noun, and VNCs. The dataset consists of three sections containing classical prose, modern prose, and text from Russian Wikipedia.
We consider only the Russian Wikipedia portion because classical prose is substantially older than the text in the English VNC-Tokens dataset (which is from the British National Corpus, which primarily includes texts from the late twentieth century), and the modern prose portion is relatively small compared to the Russian Wikipedia portion, which includes roughly 500M tokens. Each instance is accompanied by a context window of up to three paragraphs. Metadata for this dataset indicating the location of the target expression in the context unfortunately does not appear to be available. We therefore restrict our experiments to the subset of this dataset for which there is an exact match between the target expression and a token sequence in the context. This gives a dataset consisting of 37 expressions and 775 token instances. 5 The dataset is again roughly balanced between idiomatic and literal usages with 54.3% being idiomatic. In contrast to the English dataset, we do not split this Russian dataset at the type level into separate DEV and TEST datasets because we carry out no hyper-parameter tuning on this dataset. We refer to this dataset as RUSSIAN.", "cite_spans": [ { "start": 10, "end": 30, "text": "Salton et al. (2016)", "ref_id": "BIBREF21" }, { "start": 33, "end": 53, "text": "King and Cook (2018)", "ref_id": "BIBREF12" }, { "start": 140, "end": 159, "text": "(Cook et al., 2008)", "ref_id": "BIBREF3" }, { "start": 242, "end": 257, "text": "(Burnard, 2000)", "ref_id": "BIBREF2" }, { "start": 1314, "end": 1315, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "Statistics for the number of PIE types and tokens, and the percentage of idiomatic tokens, in each dataset, are given in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 121, "end": 128, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "We first consider an experimental setup similar to King and Cook (2018) and Kurfal\u0131 and \u00d6stling (2020) , referred to here as \"all expressions\". In this monolingual experimental setup we train and test on instances of the same PIEs in the same language. For each of EN-DEV, EN-TEST, and RUSSIAN, we randomly partition the instances into training (roughly 75%) and testing (roughly 25%) sets, keeping the ratio of idiomatic to literal usages of each expression balanced across the training and testing sets. We repeat this random partitioning 10 times. For EN-DEV and EN-TEST we use the same partitions as King and Cook.", "cite_spans": [ { "start": 51, "end": 71, "text": "King and Cook (2018)", "ref_id": "BIBREF12" }, { "start": 76, "end": 102, "text": "Kurfal\u0131 and \u00d6stling (2020)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setups and Evaluation", "sec_num": "4.2" }, { "text": "We do not expect to have annotated instances of all PIE types, limiting the applicability of models developed for the all expressions experimental setup. We are therefore particularly interested in determining whether a supervised model is able to generalize to expressions that were unseen during training. Here we consider a second monolingual experimental setup proposed by Gharbieh et al. (2016) , referred to here as \"unseen expressions\". In these experiments we hold out all instances of one PIE type for testing, and train on all instances of the remaining types (within either EN-DEV, EN-TEST, or RUSSIAN).
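As a concrete sketch of this protocol (our illustration with toy stand-ins for the data, not the authors' code), the hold-out-one-expression splits can be generated by grouping instances by PIE type, for example with scikit-learn:

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

# Toy stand-ins: one row per PIE instance, grouped by expression type.
X = np.zeros((6, 768))            # instance representations
y = np.array([1, 0, 1, 1, 0, 0])  # 1 = idiomatic, 0 = literal
groups = np.array(['hit the road'] * 3 + ['pull the plug'] * 3)

for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    # Train on all instances of the remaining types; test on the held-out type.
    held_out = groups[test_idx][0]
    print(held_out, len(train_idx), len(test_idx))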
We repeat this 14 times for each of EN-DEV and EN-TEST, and 37 times for RUSSIAN, holding out each PIE type once for testing.", "cite_spans": [ { "start": 377, "end": 399, "text": "Gharbieh et al. (2016)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setups and Evaluation", "sec_num": "4.2" }, { "text": "For both experimental setups -i.e., all expressions and unseen expressions -we train and test models on EN-DEV for preliminary experiments and setting parameters. We then report final results by training and testing models on EN-TEST and RUSSIAN.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setups and Evaluation", "sec_num": "4.2" }, { "text": "Just as we do not expect to have annotated instances of all PIE types for a given language, we also do not expect to have annotated instances of PIEs for all languages. We therefore consider an extension of the monolingual unseen expressions experimental setup in which we evaluate on instances of PIEs in a language that was not observed during training, referred to as \"cross-lingual\". In these experiments we train on either English or Russian, and evaluate on the other language. In particular, we train on either EN-DEV or EN-TEST and evaluate on RUSSIAN, and also train on RUSSIAN and evaluate on each of EN-DEV and EN-TEST.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setups and Evaluation", "sec_num": "4.2" }, { "text": "The idiomatic and literal classes for both the English and Russian datasets are roughly balanced (Table 1) . We therefore evaluate using accuracy. For the all expressions experimental setup, we report average accuracy across the 10 runs. In the unseen expressions experimental setup, we repeatedly hold out each expression until all instances of each expression (within either EN-DEV, EN-TEST, or RUSSIAN) have been classified, and then compute accuracy. For the cross-lingual experiments, we simply calculate accuracy over all instances in the dataset used for testing.", "cite_spans": [], "ref_spans": [ { "start": 97, "end": 106, "text": "(Table 1)", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experimental Setups and Evaluation", "sec_num": "4.2" }, { "text": "We use Huggingface (Wolf et al., 2020) implementations of BERT, RoBERTa, mBERT, and RuBERT. Specifically, we use bert-base-uncased, roberta-base, bert-base-multilingual-cased, and rubert-base-cased. All models have 12 layers and a hidden layer size of 768. The number of parameters for BERT, RoBERTa, mBERT, and RuBERT is 110M, 125M, 179M, and 180M, respectively. BERT and RoBERTa are trained on uncased and cased English text, respectively. mBERT is trained on text from 104 languages. RuBERT is trained on Russian Wikipedia and Russian news data. We use BERT, RoBERTa, and mBERT for monolingual English experiments; RuBERT and mBERT for monolingual Russian experiments; and mBERT for cross-lingual experiments.", "cite_spans": [ { "start": 19, "end": 38, "text": "(Wolf et al., 2020)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation and Parameter Settings", "sec_num": "4.3" }, { "text": "We train our models using the Adam optimizer (Kingma and Ba, 2015) to minimize the cross-entropy loss. We use the default dropout of 0.5 for the network layers on top of BERT, RoBERTa, mBERT, or RuBERT. For fine-tuning, Devlin et al.
(2019) recommend the following parameter settings: batch size of 8, 16, or 32; epochs between 2 and 4; and learning rate of 2e-5, 3e-5, or 5e-5.", "cite_spans": [ { "start": 225, "end": 245, "text": "Devlin et al. (2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation and Parameter Settings", "sec_num": "4.3" }, { "text": "We perform grid search over these parameter settings on EN-DEV for the monolingual all expressions and unseen expressions experimental setups. We report results for the best parameter settings on EN-DEV, and then use only these parameter settings for experiments on EN-TEST and RUSSIAN. For the cross-lingual experiments, we do no further parameter tuning, and report results for the best parameter settings for the unseen expressions experimental setup for EN-DEV. We repeat the experiments 10 times with different random seeds, and report the mean accuracy and standard deviation over the runs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation and Parameter Settings", "sec_num": "4.3" }, { "text": "In this section, we present results for the unseen and all expressions experimental setups, for monolingual experiments on English (Section 5.1) and Russian (Section 5.2). In Section 6 we present results for cross-lingual experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monolingual Results", "sec_num": "5" }, { "text": "For English, we compare against three baselines: a most-frequent class (MFC) baseline, the unsupervised approach of Fazly et al. (2009) based on canonical forms (CForm), and the supervised approach of King and Cook (2018) .", "cite_spans": [ { "start": 116, "end": 142, "text": "Fazly et al. (2009, CForm)", "ref_id": null }, { "start": 200, "end": 220, "text": "King and Cook (2018)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "English", "sec_num": "5.1" }, { "text": "We begin by considering results for the all expressions experimental setup. Results are shown in the top panel of Table 2 (labelled \"All\"). On each dataset, both BERT and RoBERTa outperform all baselines, including King and Cook (2018) when using the canonical form (CF) feature (indicated by \"+CF\" in Table 2 ). This finding demonstrates that contextualized embeddings are able to better capture knowledge of the idiomaticity of PIEs than previous approaches. mBERT performs relatively poorly compared to BERT and RoBERTa, although it still outperforms the baselines, with the exception of King and Cook when using the CF feature.", "cite_spans": [ { "start": 215, "end": 235, "text": "King and Cook (2018)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 302, "end": 309, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "English", "sec_num": "5.1" }, { "text": "We now examine the impact of the CF feature in the all expressions experimental setup. 6 For each model based on contextualized embeddings, incorporating the CF feature gives an improvement, but these improvements are small relative to the standard deviation across runs. This is in contrast to the substantial improvements obtained by King and Cook (2018) when using the CF feature.
These findings suggest that contextualized embeddings are able to better capture the linguistic knowledge encoded in this feature than conventional word embeddings, which King and Cook use to represent VNC instances.", "cite_spans": [ { "start": 336, "end": 356, "text": "King and Cook (2018)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "English", "sec_num": "5.1" }, { "text": "We now consider results for the unseen expressions experimental setup. Results are shown in the bottom panel of Table 2 (labelled \"Unseen\"). On EN-DEV, the best results are again obtained using BERT; however, the accuracy drops substantially on EN-TEST. RoBERTa performs more consistently across EN-DEV and EN-TEST, and performs best on EN-TEST. mBERT again performs relatively poorly compared to BERT and RoBERTa, but nevertheless substantially outperforms the most-frequent class baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "English", "sec_num": "5.1" }, { "text": "Focusing on the contribution of the CF feature, results for both BERT and RoBERTa on EN-DEV do not show a clear improvement when incorporating this feature, when considering the standard deviation across runs. The impact of this feature in experiments on EN-TEST is similar. This finding again suggests that contextualized embeddings capture much of the linguistic knowledge encoded in this feature. We therefore focus on results for BERT and RoBERTa that do not incorporate the CF feature.", "cite_spans": [ { "start": 65, "end": 69, "text": "BERT", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "English", "sec_num": "5.1" }, { "text": "Focusing on results for EN-TEST (for which no hyper-parameter tuning was carried out), the substantial improvements over the most-frequent class baseline, and over the CForm baseline (with the exception of mBERT, when accounting for variation across runs), suggest that the classifiers (including the approach of King and Cook) have learned information about the idiomaticity of PIEs that is not restricted to specific expressions, as in the case of the all expressions experimental setup. Furthermore, BERT and RoBERTa (without the CF feature) outperform the approach of King and Cook (2018) , although given the standard deviation across runs, this difference does not appear to be significant for BERT when comparing against the approach of King and Cook when they use the CF feature.", "cite_spans": [ { "start": 332, "end": 346, "text": "King and Cook)", "ref_id": null }, { "start": 591, "end": 611, "text": "King and Cook (2018)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "English", "sec_num": "5.1" }, { "text": "In experiments until now we have used representations from the final layer of contextualized embedding models (BERT, RoBERTa, and mBERT). We now consider the effect of using different hidden layers, focusing on the unseen expressions experimental setup for BERT and RoBERTa, in an effort to explain the relatively poor performance of BERT here. Results are shown in Table 3 . 7 Table 3 : % accuracy and standard deviation for the unseen expressions experimental setup on EN-DEV and EN-TEST using BERT and RoBERTa with representations from the indicated layers. The best results for each model and dataset are shown in boldface. In all cases, except for BERT on EN-TEST, the final layer performs best. This is in line with the findings of Jawahar et al. (2019) that the upper layers of BERT encode semantic information.
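Layer-wise representations of this kind can be obtained by requesting all hidden states from the model; a minimal sketch (our illustration, with a placeholder sentence, not the authors' code):

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModel.from_pretrained('bert-base-uncased', output_hidden_states=True)

inputs = tokenizer('Two climbers dislodged another huge block.', return_tensors='pt')
with torch.no_grad():
    hidden_states = model(**inputs).hidden_states

# For a 12-layer model, hidden_states is a 13-tuple (embedding output plus layers
# 1-12); hidden_states[12] is the final layer, hidden_states[11] the second-last.
cls_layer_11 = hidden_states[11][:, 0, :]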
For BERT, for which accuracy on EN-TEST was low relative to EN-DEV in Table 2 , the second-last layer performs best on EN-TEST.", "cite_spans": [ { "start": 628, "end": 629, "text": "7", "ref_id": null } ], "ref_spans": [ { "start": 238, "end": 245, "text": "Table 3", "ref_id": null }, { "start": 618, "end": 625, "text": "Table 3", "ref_id": null }, { "start": 886, "end": 893, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "English", "sec_num": "5.1" }, { "text": "For monolingual experiments on Russian, we again consider the all and unseen expressions experimental setups. Here we compare against a most-frequent class baseline. Table 4 : % accuracy and standard deviation for the all and unseen expressions experimental setups on RUSSIAN for RuBERT, mBERT, and the most-frequent class baseline (MFC). The best accuracy for each experimental setup is shown in boldface.", "cite_spans": [], "ref_spans": [ { "start": 191, "end": 198, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Russian", "sec_num": "5.2" }, { "text": "Although Aharodnik et al. (2018) report preliminary results on this dataset, they are not for the same experimental setups that we consider, and so we do not compare against their results. Here we consider RuBERT, a monolingual Russian model, and mBERT, which includes Russian text in its pre-training. For the all and unseen expressions experimental setups we use the best hyper-parameter settings for EN-DEV using BERT for the unseen and all expressions experimental setups, respectively; i.e., we do not do any hyper-parameter tuning on RUSSIAN.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Russian", "sec_num": "5.2" }, { "text": "Results are shown in Table 4 . We see that in both the all and unseen expressions experimental setups, both RuBERT and mBERT substantially outperform the most-frequent class baseline. We also see that, accounting for variation across runs, the performance of RuBERT and mBERT is similar within each experimental setup.", "cite_spans": [], "ref_spans": [ { "start": 21, "end": 28, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Russian", "sec_num": "5.2" }, { "text": "These findings add to those of Section 5.1, and again indicate that contextualized embeddings encode knowledge of PIE idiomaticity, although in this case the experiments consider a range of PIE syntactic constructions, as opposed to only VNCs. These findings also again indicate that the classifier for the unseen expressions experimental setup has learned information about the idiomaticity of PIEs that is not restricted to expressions that were observed during training. In the following section we consider whether contextualized embeddings encode knowledge of idiomaticity that can be generalized across languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Russian", "sec_num": "5.2" }, { "text": "In this section we consider cross-lingual experiments in which we train on instances of PIEs in a source language, and evaluate on instances of PIEs in a (different) target language. We consider the case of both English-to-Russian and Russian-to-English. For English we consider both EN-DEV and EN-TEST. In these experiments we train on the entire source language dataset (i.e., when Russian is the source language we train on RUSSIAN, and when English is the source language we train on either EN-DEV or EN-TEST), and evaluate on the entire target language dataset.
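Concretely, the protocol amounts to fine-tuning mBERT on all source-language instances and then classifying target-language instances with no further training. A minimal sketch with toy data (our illustration, not the authors' code; for brevity we use the built-in sequence-classification head of transformers rather than the two-layer network of Section 3):

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-multilingual-cased')
model = AutoModelForSequenceClassification.from_pretrained(
    'bert-base-multilingual-cased', num_labels=2)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)

# Toy source-language (Russian) instances: 'бить баклуши' is idiomatic ('to idle');
# the second sentence uses the same verb literally. 1 = idiomatic, 0 = literal.
train_texts = ['Он весь день бил баклуши.', 'Рабочий бил молотком по гвоздю.']
labels = torch.tensor([1, 0])

model.train()
batch = tokenizer(train_texts, return_tensors='pt', padding=True, truncation=True)
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()

# Zero-shot evaluation on the target language (English), with no further training.
model.eval()
test = tokenizer(['The marchers had hit the road before 0500 hours.'],
                 return_tensors='pt')
with torch.no_grad():
    prediction = model(**test).logits.argmax(dim=-1)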
We use the best hyper-parameter settings for EN-DEV using BERT for the unseen expressions experimental setup from Section 5.1; i.e., we do not attempt any hyper-parameter tuning for this cross-lingual experimental setup. We again compare results against a most-frequent class baseline, and when English is the target language, also against the unsupervised CForm baseline (Fazly et al., 2009) .", "cite_spans": [ { "start": 936, "end": 956, "text": "(Fazly et al., 2009)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-lingual Results", "sec_num": "6" }, { "text": "Results are shown in Table 5 . For both English-to-Russian and Russian-to-English, mBERT outperforms the most-frequent class baseline. In experiments with English as the target language, mBERT also outperforms the CForm baseline, although in the case of EN-DEV the difference does not appear to be significant given the standard deviation across runs. Furthermore, the results are, remarkably, roughly on par with monolingual results for the unseen expressions experimental setup. Focusing on experiments involving EN-TEST and RUSSIAN, where for both datasets no hyper-parameter tuning was considered in previous experiments, for English-to-Russian (i.e., EN-TEST source, RUSSIAN target) mBERT achieves 72.4% accuracy, whereas in the monolingual Russian unseen expressions experimental setup, RuBERT and mBERT achieve accuracies of 74.6% and 73.6%, respectively (Table 4 ). These differences are relatively small considering the standard deviations across runs. For Russian-to-English (i.e., RUSSIAN source, EN-TEST target) mBERT achieves an accuracy of 80.1%, while the accuracies for contextualized embedding models for EN-TEST in the unseen expressions experimental setup range from 74.3% for mBERT to 82.3% for RoBERTa (Table 2 ).", "cite_spans": [], "ref_spans": [ { "start": 21, "end": 28, "text": "Table 5", "ref_id": "TABREF6" }, { "start": 871, "end": 879, "text": "(Table 4", "ref_id": null }, { "start": 1232, "end": 1240, "text": "(Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Cross-lingual Results", "sec_num": "6" }, { "text": "Whereas the findings for the monolingual unseen expressions experimental setup indicate that the classifier is able to generalize to expressions that are unseen during training, these findings for cross-lingual experiments indicate that the classifier is able to generalize across languages. This suggests that the classifier has learned information about idiomaticity that is not restricted to specific expressions, nor to a specific language. The cross-lingual findings furthermore seem to be in line with the findings of Pires et al. (2019) that cross-lingual transfer with mBERT works reasonably well even when languages do not share the same script (as for English and Russian), but works less well when the languages do not share the same word order (where English is an SVO language, and Russian has freer word order, but SVO is considered dominant (Dryer, 2013) ).", "cite_spans": [ { "start": 524, "end": 543, "text": "Pires et al. (2019)", "ref_id": "BIBREF20" }, { "start": 856, "end": 869, "text": "(Dryer, 2013)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-lingual Results", "sec_num": "6" }, { "text": "In this paper we proposed a supervised model based on contextualized embeddings to predict the idiomaticity of PIE instances.
In contrast to most prior work on this topic, we considered the ability of the model to generalize to expressions that were not observed during training, and also to generalize across languages. Code to reproduce these experiments is available. 8 We first considered monolingual experiments for English, focusing on verb-noun combinations, a common type of PIE. In experiments in which we train and test on instances of the same PIEs, we demonstrated that an approach based on contextualized embeddings improves over previous approaches based on conventional word embeddings. We then considered experiments in which we evaluate on PIEs that were not observed during training, and showed that the proposed approach improves over a strong, linguistically-informed unsupervised baseline. We further found that, in contrast to prior models based on conventional word embeddings, incorporating information about the lexico-syntactic fixedness of VNCs does not lead to clear improvements, suggesting that contextualized embeddings capture this rich linguistic knowledge. 8 https://github.com/SaminFakharian/Contextualized-Embeddings-Encode-Monolingual-and-Cross-lingual-Knowledge-of-Idiomaticity", "cite_spans": [ { "start": 371, "end": 372, "text": "8", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "In monolingual experiments on Russian we considered a wider range of types of PIEs. Here we showed that, as for English, the proposed approach improves over baselines when evaluating on expressions that were, and were not, observed during training. The experimental setup in which the model is tested on instances of PIE types that were not observed during training is particularly interesting because we do not expect to have annotated instances of all PIE types available for training supervised models. The findings in this experimental setup, for both English and Russian, indicate that the model is capturing knowledge of PIE idiomaticity that is not restricted to specific expressions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "Finally, we considered cross-lingual experiments in which we train on instances of either English or Russian PIEs, and evaluate on PIE instances in the other language. Here the proposed model again improves over baselines, and achieves performance that is roughly on par with that of monolingual experiments in which we evaluate on PIEs that were not observed during training. This finding indicates that contextualized embeddings encode knowledge of PIE idiomaticity that is not restricted to specific expressions, nor to a specific language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "In future work, we plan to further explore cross-lingual idiomaticity prediction. We would like to include more languages in the analysis to be able to measure the impact of training on multiple source languages. We further intend to consider including the target language amongst the source languages, to measure the impact of augmenting training data for the target language with data from other languages.
Finally, we intend to consider cross-lingual approaches for other MWE prediction tasks, such as predicting noun compound compositionality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "These example sentences are taken, with light editing, from the VNC-Tokens dataset (Cook et al., 2008).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Following Salton et al. (2016), King and Cook (2018), and Hashempour and Villavicencio (2020), we ignore instances labelled as unknown in VNC-Tokens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The entire Russian Wikipedia portion of the dataset consists of 40 expressions and 799 token instances. Restricting the dataset to instances that have an exact match with the target expression therefore still retains the majority of the data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We do not consider the CF feature, which was developed for and evaluated on English VNCs (Fazly et al., 2009), for experiments with mBERT. We are primarily interested in mBERT as a point of comparison for cross-lingual experiments, and so do not incorporate this English-specific knowledge here. We also do not consider the CF feature in experiments on RUSSIAN or in cross-lingual experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Results are only shown for layers 9-12. The overall trend for other layers is that lower layers achieve lower accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work is financially supported by the Natural Sciences and Engineering Research Council of Canada, the New Brunswick Innovation Foundation (NBIF), and the University of New Brunswick.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Designing a Russian idiom-annotated corpus", "authors": [ { "first": "Katsiaryna", "middle": [], "last": "Aharodnik", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Feldman", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Peng", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katsiaryna Aharodnik, Anna Feldman, and Jing Peng. 2018. Designing a Russian idiom-annotated corpus. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Multiword expressions", "authors": [ { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Su", "middle": [ "Nam" ], "last": "Kim", "suffix": "" } ], "year": 2010, "venue": "Handbook of Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Baldwin and Su Nam Kim. 2010. Multiword expressions. In Nitin Indurkhya and Fred J. Damerau, editors, Handbook of Natural Language Processing, 2nd edition.
CRC Press, Boca Raton, USA.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The British National Corpus Users Reference Guide", "authors": [ { "first": "Lou", "middle": [], "last": "Burnard", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lou Burnard. 2000. The British National Corpus Users Reference Guide. Oxford University Computing Services.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The VNC-Tokens Dataset", "authors": [ { "first": "Paul", "middle": [], "last": "Cook", "suffix": "" }, { "first": "Afsaneh", "middle": [], "last": "Fazly", "suffix": "" }, { "first": "Suzanne", "middle": [], "last": "Stevenson", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the LREC Workshop on Towards a Shared Task for Multiword Expressions (MWE 2008)", "volume": "", "issue": "", "pages": "19--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Cook, Afsaneh Fazly, and Suzanne Stevenson. 2008. The VNC-Tokens Dataset. In Proceedings of the LREC Workshop on Towards a Shared Task for Multiword Expressions (MWE 2008), pages 19-22, Marrakech, Morocco.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The World Atlas of Language Structures Online", "authors": [ { "first": "Matthew", "middle": [ "S" ], "last": "Dryer", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew S. Dryer. 2013. Order of subject, object and verb. In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Language Structures Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Unsupervised type and token identification of idiomatic expressions", "authors": [ { "first": "Afsaneh", "middle": [], "last": "Fazly", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Cook", "suffix": "" }, { "first": "Suzanne", "middle": [], "last": "Stevenson", "suffix": "" } ], "year": 2009, "venue": "Computational Linguistics", "volume": "35", "issue": "1", "pages": "61--103", "other_ids": { "DOI": [ "10.1162/coli.08-010-R1-07-048" ] }, "num": null, "urls": [], "raw_text": "Afsaneh Fazly, Paul Cook, and Suzanne Stevenson. 2009.
Unsupervised type and token identification of idiomatic expressions. Computational Linguistics, 35(1):61-103.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A word embedding approach to identifying verb-noun idiomatic combinations", "authors": [ { "first": "Waseem", "middle": [], "last": "Gharbieh", "suffix": "" }, { "first": "Virendra", "middle": [], "last": "Bhavsar", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Cook", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 12th Workshop on Multiword Expressions", "volume": "", "issue": "", "pages": "112--118", "other_ids": { "DOI": [ "10.18653/v1/W16-1817" ] }, "num": null, "urls": [], "raw_text": "Waseem Gharbieh, Virendra Bhavsar, and Paul Cook. 2016. A word embedding approach to identifying verb-noun idiomatic combinations. In Proceedings of the 12th Workshop on Multiword Expressions, pages 112-118, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The other side of the coin: Unsupervised disambiguation of potentially idiomatic expressions by contrasting senses", "authors": [ { "first": "Hessel", "middle": [], "last": "Haagsma", "suffix": "" }, { "first": "Malvina", "middle": [], "last": "Nissim", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Bos", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018)", "volume": "", "issue": "", "pages": "178--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hessel Haagsma, Malvina Nissim, and Johan Bos. 2018. The other side of the coin: Unsupervised disambiguation of potentially idiomatic expressions by contrasting senses. In Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018), pages 178-184, Santa Fe, New Mexico, USA. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Leveraging contextual embeddings and idiom principle for detecting idiomaticity in potentially idiomatic expressions", "authors": [ { "first": "Reyhaneh", "middle": [], "last": "Hashempour", "suffix": "" }, { "first": "Aline", "middle": [], "last": "Villavicencio", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Workshop on the Cognitive Aspects of the Lexicon", "volume": "", "issue": "", "pages": "72--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reyhaneh Hashempour and Aline Villavicencio. 2020. Leveraging contextual embeddings and idiom principle for detecting idiomaticity in potentially idiomatic expressions. In Proceedings of the Workshop on the Cognitive Aspects of the Lexicon, pages 72-80, Online. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A challenge set approach to evaluating machine translation", "authors": [ { "first": "Pierre", "middle": [], "last": "Isabelle", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "George", "middle": [], "last": "Foster", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2486--2496", "other_ids": { "DOI": [ "10.18653/v1/D17-1263" ] }, "num": null, "urls": [], "raw_text": "Pierre Isabelle, Colin Cherry, and George Foster. 2017. A challenge set approach to evaluating machine translation.
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2486-2496, Copenhagen, Denmark. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "What does BERT learn about the structure of language", "authors": [ { "first": "Ganesh", "middle": [], "last": "Jawahar", "suffix": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Sagot", "suffix": "" }, { "first": "Djam\u00e9", "middle": [], "last": "Seddah", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3651--3657", "other_ids": { "DOI": [ "10.18653/v1/P19-1356" ] }, "num": null, "urls": [], "raw_text": "Ganesh Jawahar, Beno\u00eet Sagot, and Djam\u00e9 Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3651-3657, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Leveraging distributed representations and lexico-syntactic fixedness for token-level prediction of the idiomaticity of English verb-noun combinations", "authors": [ { "first": "Milton", "middle": [], "last": "King", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Cook", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "345--350", "other_ids": { "DOI": [ "10.18653/v1/P18-2055" ] }, "num": null, "urls": [], "raw_text": "Milton King and Paul Cook. 2018. Leveraging distributed representations and lexico-syntactic fixedness for token-level prediction of the idiomaticity of English verb-noun combinations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 345-350, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [ "P" ], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Skip-thought vectors", "authors": [ { "first": "Ryan", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "Yukun", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Richard", "middle": [ "S" ], "last": "Zemel", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Torralba", "suffix": "" }, { "first": "Raquel", "middle": [], "last": "Urtasun", "suffix": "" }, { "first": "Sanja", "middle": [], "last": "Fidler", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems", "volume": "28", "issue": "", "pages": "3276--3284", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2015.
Skip-thought vectors. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 3276-3284. Curran Associates, Inc.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Adaptation of deep bidirectional multilingual transformers for Russian language", "authors": [ { "first": "Yuri", "middle": [], "last": "Kuratov", "suffix": "" }, { "first": "Mikhail", "middle": [], "last": "Arkhipov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1905.07213" ] }, "num": null, "urls": [], "raw_text": "Yuri Kuratov and Mikhail Arkhipov. 2019. Adaptation of deep bidirectional multilingual transformers for Russian language. arXiv preprint arXiv:1905.07213.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Disambiguation of potentially idiomatic expressions with contextual embeddings", "authors": [ { "first": "Murathan", "middle": [], "last": "Kurfal\u0131", "suffix": "" }, { "first": "Robert", "middle": [], "last": "\u00d6stling", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Joint Workshop on Multiword Expressions and Electronic Lexicons", "volume": "", "issue": "", "pages": "85--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Murathan Kurfal\u0131 and Robert \u00d6stling. 2020. Disambiguation of potentially idiomatic expressions with contextual embeddings. In Proceedings of the Joint Workshop on Multiword Expressions and Electronic Lexicons, pages 85-94, online. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Heuristically informed unsupervised idiom usage recognition", "authors": [ { "first": "Changsheng", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Hwa", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1723--1731", "other_ids": { "DOI": [ "10.18653/v1/D18-1199" ] }, "num": null, "urls": [], "raw_text": "Changsheng Liu and Rebecca Hwa. 2018. Heuristically informed unsupervised idiom usage recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1723-1731, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "RoBERTa: A robustly optimized BERT pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "context2vec: Learning generic context embedding with bidirectional LSTM", "authors": [ { "first": "Oren", "middle": [], "last": "Melamud", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Goldberger", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" } ], "year": 2016, "venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "51--61", "other_ids": { "DOI": [ "10.18653/v1/K16-1006" ] }, "num": null, "urls": [], "raw_text": "Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context embedding with bidirectional LSTM. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 51-61, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "How multilingual is multilingual BERT?", "authors": [ { "first": "Telmo", "middle": [], "last": "Pires", "suffix": "" }, { "first": "Eva", "middle": [], "last": "Schlinger", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Garrette", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4996--5001", "other_ids": { "DOI": [ "10.18653/v1/P19-1493" ] }, "num": null, "urls": [], "raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Idiom token classification using sentential distributed semantics", "authors": [ { "first": "Giancarlo", "middle": [], "last": "Salton", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Ross", "suffix": "" }, { "first": "John", "middle": [], "last": "Kelleher", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "194--204", "other_ids": { "DOI": [ "10.18653/v1/P16-1019" ] }, "num": null, "urls": [], "raw_text": "Giancarlo Salton, Robert Ross, and John Kelleher. 2016. Idiom token classification using sentential distributed semantics. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 194-204, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Still a pain in the neck: Evaluating text representations on lexical composition. Transactions of the Association for Computational Linguistics", "authors": [ { "first": "Vered", "middle": [], "last": "Shwartz", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" } ], "year": 2019, "venue": "", "volume": "7", "issue": "", "pages": "403--419", "other_ids": { "DOI": [ "10.1162/tacl_a_00277" ] }, "num": null, "urls": [], "raw_text": "Vered Shwartz and Ido Dagan. 2019. Still a pain in the neck: Evaluating text representations on lexical composition.
Transactions of the Association for Computational Linguistics, 7:403-419.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "Remi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "von Platen", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Scao", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Drame", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "Lhoest", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "38--45", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-demos.6" ] }, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "TABREF0": { "num": null, "html": null, "type_str": "table", "content": "
Dataset   # expressions   # tokens   % idiomatic
EN-DEV    14              594        60.9
EN-TEST   14              613        63.3
RUSSIAN   37              775        54.3
", "text": "annotated as literal or idiomatic.4 VNC-Tokens includes DEV (development) andTEST sets -referred to here as EN-DEV and EN-TEST to distinguish them from the Russian dataset introduced below -which each include roughly 600 instances of 14 VNC types. The expressions in EN-DEV and EN-TEST do not overlap. Each of EN-DEV and EN-TEST is roughly balanced with respect to idiomatic and literal instances. We use EN-DEV for hyper-parameter tuning, and carry out no such tuning on EN-TEST.For Russian, we use the dataset ofAharodnik et al. (2018) which consists of instances of Russian PIEs annotated at the token level as literal or idiomatic. Unlike the English dataset, this dataset is not restricted to VNCs. It includes id-The number of PIE types and tokens, and the percentage of idiomatic tokens, in each dataset." }, "TABREF1": { "num": null, "html": null, "type_str": "table", "content": "
Setup    Model                  EN-DEV −CF    EN-DEV +CF    EN-TEST −CF    EN-TEST +CF
All      MFC                    63.4          63.4          62.9           62.9
         CForm                  75.0          75.0          71.1           71.1
         King and Cook (2018)   82.5          85.6          81.5           84.7
         BERT                   90.7 ±0.53    90.8 ±0.51    89.3 ±1.11     89.8 ±0.71
         RoBERTa                88.3 ±0.96    89.9 ±0.66    88.6 ±0.87     89.0 ±0.48
         mBERT                  84.1 ±0.8     -             83.8 ±1.1      -
Unseen   MFC                    60.9          60.9          63.3           63.3
         CForm                  73.6          73.6          70.0           70.0
         King and Cook (2018)   72.3          76.4          74.6           77.8
         BERT                   83.5 ±0.97    83.4 ±0.65    78.6 ±1.78     79.8 ±1.55
         RoBERTa                81.8 ±1.60    82.4 ±1.20    82.3 ±1.76     80.6 ±2.35
         mBERT                  75.4 ±1.5     -             74.3 ±2.2      -
", "text": "" }, "TABREF2": { "num": null, "html": null, "type_str": "table", "content": "", "text": "% accuracy and standard deviation for the all and unseen expressions experimental setups on EN-DEV and EN-TEST, for BERT, RoBERTa, and mBERT, with and without the CF feature. % accuracy for the baselines is also shown. The best accuracy for each experimental setup, on each dataset, with and without the CF feature, is shown in boldface." }, "TABREF6": { "num": null, "html": null, "type_str": "table", "content": "
", "text": "% accuracy and standard deviation for cross-lingual experiments from English to Russian (top panel) and Russian to English (bottom panel) using mBERT, a most-frequent class (MFC) baseline, and for English, the unsupervised CForm baseline." } } } }