{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:09:25.163703Z" }, "title": "It's not Greek to mBERT: Inducing Word-Level Translations from Multilingual BERT", "authors": [ { "first": "Hila", "middle": [], "last": "Gonen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Bar Ilan University", "location": {} }, "email": "" }, { "first": "Shauli", "middle": [], "last": "Ravfogel", "suffix": "", "affiliation": { "laboratory": "", "institution": "Bar Ilan University", "location": {} }, "email": "shauli.ravfogel@gmail.com" }, { "first": "Yanai", "middle": [], "last": "Elazar", "suffix": "", "affiliation": { "laboratory": "", "institution": "Bar Ilan University", "location": {} }, "email": "yanaiela@gmail.com" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "", "affiliation": { "laboratory": "", "institution": "Bar Ilan University", "location": {} }, "email": "yoav.goldberg@gmail.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Recent works have demonstrated that multilingual BERT (mBERT) learns rich cross-lingual representations, that allow for transfer across languages. We study the word-level translation information embedded in mBERT and present two simple methods that expose remarkable translation capabilities with no finetuning. The results suggest that most of this information is encoded in a non-linear way, while some of it can also be recovered with purely linear tools. As part of our analysis, we test the hypothesis that mBERT learns representations which contain both a languageencoding component and an abstract, crosslingual component, and explicitly identify an empirical language-identity subspace within mBERT representations.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Recent works have demonstrated that multilingual BERT (mBERT) learns rich cross-lingual representations, that allow for transfer across languages. We study the word-level translation information embedded in mBERT and present two simple methods that expose remarkable translation capabilities with no finetuning. The results suggest that most of this information is encoded in a non-linear way, while some of it can also be recovered with purely linear tools. As part of our analysis, we test the hypothesis that mBERT learns representations which contain both a languageencoding component and an abstract, crosslingual component, and explicitly identify an empirical language-identity subspace within mBERT representations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Multilingual-BERT (mBERT) is a version of BERT (Devlin et al., 2019) , trained on the concatenation of Wikipedia in 104 different languages. 
Recent works show that it excels in zero-shot transfer between languages, for a variety of tasks (Pires et al., 2019; Muller et al., 2020) , despite being trained with no parallel supervision.", "cite_spans": [ { "start": 47, "end": 68, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 238, "end": 258, "text": "(Pires et al., 2019;", "ref_id": "BIBREF14" }, { "start": 259, "end": 279, "text": "Muller et al., 2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous work has mainly focused on what is needed for zero-shot transfer to work well (Muller et al., 2020; Karthikeyan et al., 2020; Wu and Dredze, 2019) , and on characterizing the representations of mBERT (Singh et al., 2019) . However, we still lack a proper understanding of this model.", "cite_spans": [ { "start": 87, "end": 108, "text": "(Muller et al., 2020;", "ref_id": "BIBREF11" }, { "start": 109, "end": 134, "text": "Karthikeyan et al., 2020;", "ref_id": "BIBREF6" }, { "start": 135, "end": 155, "text": "Wu and Dredze, 2019)", "ref_id": "BIBREF21" }, { "start": 209, "end": 229, "text": "(Singh et al., 2019)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work we study (1) how much word-level translation information is recoverable by mBERT; and (2) how this information is stored. We focus on the representations of the last layer, and on the embedding matrix that is shared between the input and output layers -which are together responsible for token prediction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For our first goal, we start by presenting a simple and strong method to extract word-level translation information. Our method is based on explicit querying of mBERT: given a source word and a target language, we feed mBERT with a template such as \"The word 'SOURCE' in LANGUAGE is: [MASK] .\" where LANGUAGE is the target language, and SOURCE is an English word to translate. Getting the correct translation as the prediction of the masked token exposes mBERT's ability to provide word-level translation. This template-based method is surprisingly successful, especially considering the fact that no parallel supervision was provided to the model while training, and that word translation is not part of the training objective.", "cite_spans": [ { "start": 284, "end": 290, "text": "[MASK]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This raises the possibility of easy disentanglement between language identity and lexical semantics in mBERT representations. We test this hypothesis by trying to explicitly disentangle language identity from lexical semantics under linearity assumptions. We propose a method for disentangling a language-encoding component and a language-neutral component from both the embedding representations and word-in-context representations. Furthermore, we learn the empirical \"language subspace\" in mBERT, which is a linear subspace that is spanned by all directions that are linearly correlated with the language identity.
We demonstrate that the representations are well-separated by language on that subspace.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We leverage these insights and empirical results to show that it is possible to perform analogies-based translation by taking advantage of this disentanglement: we can alter the language-encoding component, while keeping the lexical component intact. We compare the template-based method and the analogies-based method and discuss their similarities and differences, as well as their limitations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The two methods together show that mBERT acquired, to a large degree, the ability to perform word-level translation, despite the fact that it is not trained on any parallel data explicitly. The results suggest that most of the information is stored in a non-linear way, but with some linearly-recoverable components.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contribution in this work is two-fold: (a) we present two simple methods for word-level translation using mBERT that require no training or finetuning of the model, which demonstrate that mBERT stores parallel information in different languages; (b) we show that mBERT representations are composed of language-encoding and language-neutral components and present a method for extracting those components. Our code is available at https://github.com/gonenhila/mbert.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Pires et al. (2019) begin a line of work that studies mBERT representations and capabilities. In their work, they inspect the model's zero-shot transfer abilities using different probing experiments, and propose a way to map sentence representations in different languages, with some success. Karthikeyan et al. (2020) further analyze the properties that affect zero-shot transfer by experimenting with bilingual BERTs on RTE (recognizing textual entailment) and NER. They analyze performance with respect to linguistic properties and similarities of the source and the target languages, and some parameters of the model itself (e.g. network architecture and learning objective). In a closely related work, Wu and Dredze (2019) perform transfer learning from English to 38 languages, on 5 tasks (POS, parsing, NLI, NER, document classification), and report good results. Additionally, they show that language-specific information is preserved in all layers. Wang et al. (2019) learn alignment between contextualized representations, and use it for zero-shot transfer.", "cite_spans": [ { "start": 293, "end": 318, "text": "Karthikeyan et al. (2020)", "ref_id": "BIBREF6" }, { "start": 958, "end": 976, "text": "Wang et al. (2019)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "Beyond focusing on zero-shot transfer abilities, an additional line of work studies the representations of mBERT and the information it stores. Using hierarchical clustering based on the CCA similarity scores between languages, Singh et al. (2019) are able to construct a tree structure that faithfully describes relations between languages. Chi et al. 
(2020) learn a linear syntax-subspace in mBERT, and point to syntactic regularities in the representations that transfer across languages. In the recent work of Cao et al. (2020) , the authors define the notion of contextual word alignment. They design a fine-tuning loss for improving alignments and show that they are able to improve zero-shot transfer after this alignment-based fine-tuning. One main difference from our work is that they fine-tune the model according to their new definition of contextual alignment, while we analyze and use the information already stored in the model. One of the closest works to ours is that of Libovick\u1ef3 et al. (2019) , where they assume that mBERT's representations have a language-neutral component, and a language-specific component. They remove the language-specific component by subtracting the centroid of the language from the representations, and make an attempt to prove the assumption by using probing tasks on the original vs. new representations. They show that the new representations are more language-neutral to some extent, but lack experiments that show a complementary component. While those works demonstrate that mBERT representations in different languages can be aligned successfully with appropriate supervision, we propose an explicit decomposition of the representations into language-encoding and language-neutral components, and also demonstrate that implicit word-level translations can be easily distilled from the model when exposed to the proper stimuli.", "cite_spans": [ { "start": 228, "end": 247, "text": "Singh et al. (2019)", "ref_id": "BIBREF17" }, { "start": 342, "end": 359, "text": "Chi et al. (2020)", "ref_id": "BIBREF1" }, { "start": 518, "end": 535, "text": "Cao et al. (2020)", "ref_id": "BIBREF0" }, { "start": 992, "end": 1015, "text": "Libovick\u1ef3 et al. (2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "3 Word-level Translation using Pre-defined Templates", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "We study the extent to which it is possible to extract word-level translation directly from mBERT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous Work", "sec_num": "2" }, { "text": "We present a simple and overwhelmingly successful method for word-level translation with mBERT. This method is based on the idea of explicitly querying mBERT for a translation, similar to what has been done with LMs for other tasks (Petroni et al., 2019; Talmor et al., 2019) . We experimented with seven different templates and found the following to work best: \"The word 'SOURCE' in LANGUAGE is: [MASK] .\" 1 The predictions from the [MASK] token induce a distribution over the vocabulary, and we take the most probable word as the translation.
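For concreteness, the querying step can be sketched as follows; this is a minimal illustration assuming the standard bert-base-multilingual-cased checkpoint and a recent version of the HuggingFace transformers library, not the authors' exact implementation:

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tok = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertForMaskedLM.from_pretrained("bert-base-multilingual-cased")
model.eval()

def template_translate(source_word, target_language, k=10):
    # Fill the template and let mBERT predict the masked slot.
    text = f"The word '{source_word}' in {target_language} is: {tok.mask_token}."
    inputs = tok(text, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tok.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]   # distribution over the vocabulary
    top_ids = logits.topk(k + 1).indices[0].tolist()
    tokens = tok.convert_ids_to_tokens(top_ids)
    # The source word itself is removed from the ranking, as in the evaluation below.
    return [t for t in tokens if t != source_word][:k]

print(template_translate("dog", "French"))   # 'chien' is expected to be ranked highly
```

The top-ranked prediction is taken as the translation; the full ranking is what the evaluation below scores.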
Table 1 : Word-level translation results with the template-based method and the analogies-based method (introduced in Section 5). @1-100 stand for accuracy@k (higher is better), \"rank\" stands for the average rank of the correct translation, \"log\" stands for the log of the average rank, and \"win\" stands for the percentage of cases in which the tested method is strictly better than the baseline.", "cite_spans": [ { "start": 232, "end": 254, "text": "(Petroni et al., 2019;", "ref_id": "BIBREF13" }, { "start": 255, "end": 275, "text": "Talmor et al., 2019)", "ref_id": "BIBREF18" }, { "start": 398, "end": 404, "text": "[MASK]", "ref_id": null } ], "ref_spans": [ { "start": 546, "end": 553, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Word-level Translation: You Just Have to Ask", "sec_num": "3.1" }, { "text": "To evaluate lexical translation quality, we use NorthEuraLex 2 (Dellert et al., 2019) , a lexical database providing translations of 1016 words into 107 languages. We use these parallel translations to evaluate our translation method when translating from English to other target languages. 3 We restrict our evaluation to a set of common languages from diverse language families: Russian, French, Italian, Dutch, Spanish, Hebrew, Turkish, Romanian, Korean, Arabic and Japanese. We omit cases in which the source word or the target word is tokenized using mBERT into more than a single token. 4 The words in the dataset are from different POS, with Nouns, Adjectives and Verbs being the most common ones. 5 For all our experiments with mBERT, we use the transformers library of HuggingFace (Wolf et al., 2019) .", "cite_spans": [ { "start": 63, "end": 85, "text": "(Dellert et al., 2019)", "ref_id": "BIBREF3" }, { "start": 291, "end": 292, "text": "3", "ref_id": null }, { "start": 594, "end": 595, "text": "4", "ref_id": null }, { "start": 790, "end": 809, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.2" }, { "text": "We report accuracy@k in translating the English source word into different languages (for k \u2208 {1, 10, 100}): for each word pair, we check whether the target word was included in the first k retrieved words. Note that we remove the source word itself from the ranking. 6 We report three additional metrics: (a) avg-rank: the average rank of the target word (its position in the ranking of predictions); (b) avg-log-rank: the average of the log of the rank, to limit the effect of cases in which the rank is extremely low and skews the average; (c) hard-win: percent of cases in which the method results in a strictly better rank for the translated word compared to the baseline. We take the predictions we get for the masked token as the method's candidates for translation. As a baseline, we take the embedding representation of the source word and look for the closest words to it. Table 2 : Word-level translation results per POS with the template-based method; the numbers in parentheses relate to the baseline. Table 1 shows the results of the template-based method and the baseline.
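A sketch of how these metrics can be computed, assuming each word pair has already been reduced to the 1-based rank of the gold translation under the method and under the baseline (the helper name and data layout are illustrative):

```python
import numpy as np

def evaluate(method_ranks, baseline_ranks, ks=(1, 10, 100)):
    """*_ranks: 1-based rank of the gold translation for every word pair,
    computed after removing the source word itself from the ranking."""
    m = np.asarray(method_ranks, dtype=float)
    b = np.asarray(baseline_ranks, dtype=float)
    scores = {f"acc@{k}": float((m <= k).mean()) for k in ks}
    scores["avg-rank"] = float(m.mean())
    # averaging the log of the rank limits the effect of a few extremely bad ranks
    scores["avg-log-rank"] = float(np.log(m).mean())
    # hard-win: share of pairs where the method ranks the gold word strictly better
    scores["hard-win"] = float((m < b).mean())
    return scores
```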
This method significantly improves over the baseline in all metrics and achieves impressive accuracy results: acc@1 of 0.449 and acc@10 of 0.703, beating the baseline in 91.6% of the cases.", "cite_spans": [ { "start": 268, "end": 269, "text": "6", "ref_id": null } ], "ref_spans": [ { "start": 411, "end": 418, "text": "Table 2", "ref_id": null }, { "start": 1034, "end": 1041, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": null }, { "text": "Accuracy per POS To get a finer analysis of this method, we also evaluate the translations per POS. We report results on the 3 most common POS: nouns, adjectives and verbs. 7 As one might expect, nouns are the easiest to translate (both for the baseline and for our method), followed by adjectives, then verbs. See Table 2 for full results. Note that the results for these common POS tags are lower than the average over the full dataset. We hypothesize that words belonging to closed-class POS tags, such as pronouns, are easier to translate.", "cite_spans": [], "ref_spans": [ { "start": 315, "end": 322, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": null }, { "text": "To further understand the mechanism of the method, we turn to inspect the resulting representations. For each word pair, we feed mBERT with the full template and extract the last-layer representation of the masked token, right before the multiplication with the output embeddings. These representations cluster according to the target language. The ability of these representations to encode the target language may explain how this method successfully produces the translation into the correct language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Visualization of the Representation Space", "sec_num": "3.3" }, { "text": "Due to the representations clustering based on the target language (rather than semantics), we hypothesize that mBERT is also capable of predicting the target language given the source word and its translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Predicting the Language", "sec_num": "3.4" }, { "text": "To verify that, we take the same template as before, this time masking the name of the language instead of the target word. 8 We then compute acc@1,5,10 for all languages and report that for the 20 languages with the most accurate results in Table 3 (the full results can be found in Table 8 in the Appendix). The results are impressive, suggesting that mBERT indeed encodes the target language identity in this setting. The languages on which mBERT is most accurate are either widely-spoken languages (e.g. German, French), or languages with a unique script (e.g. Greek, Russian, Arabic). Indeed, we get a Spearman correlation of 0.53 between acc@1 and the amount of training data in each language. 9 We also compute a confusion matrix for the 20 most accurate languages, shown in Figure 2 . In order to better identify the nuances, we use the square-root of the values, instead of the values themselves, and remove English (which is frequently predicted as the target language, probably since the template is in English). 8 We use all languages in NorthEuraLex that are a single token according to mBERT tokenization -there are 47 such languages except for English.
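The language-prediction probe of Section 3.4 can be sketched in the same way as the translation query, with the language name as the masked slot (same assumptions as the earlier sketch; purely illustrative):

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tok = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertForMaskedLM.from_pretrained("bert-base-multilingual-cased")
model.eval()

def predict_language(source_word, target_word, k=5):
    # Same template as before, but now the language name is masked.
    text = f"The word '{source_word}' in {tok.mask_token} is: {target_word}."
    inputs = tok(text, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tok.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    return tok.convert_ids_to_tokens(logits.topk(k).indices[0].tolist())

print(predict_language("dog", "chien"))   # 'French' is expected among the top predictions
```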
9 We considered the number of articles per language from Wikipedia: https://en.wikipedia.org/wiki/Wikipedia:Multilingual_statistics, recorded in May 2020.", "cite_spans": [ { "start": 124, "end": 125, "text": "8", "ref_id": null }, { "start": 700, "end": 701, "text": "9", "ref_id": null }, { "start": 955, "end": 956, "text": "8", "ref_id": null }, { "start": 1098, "end": 1099, "text": "9", "ref_id": null } ], "ref_spans": [ { "start": 284, "end": 291, "text": "Table 8", "ref_id": null }, { "start": 782, "end": 790, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Predicting the Language", "sec_num": "3.4" }, { "text": "The confusion matrix reveals the expected behavior -mBERT confuses mainly between typologically related languages, specifically those of the same language family: Germanic languages (German, Dutch, Swedish, Danish), Romance languages (French, Latin, Italian, Spanish, Portuguese), and Semitic languages (Arabic, Hebrew). In addition, we can also identify some confusion between Germanic and Romance languages (which share much of the alphabet), as well as overprediction of languages with a lot of training data (e.g. German, French).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Predicting the Language", "sec_num": "3.4" }, { "text": "In the previous section, we saw that mBERT contains abundant word-level translation knowledge. How is this knowledge represented? We turn to analyze both the representations of words in context and those of the output embeddings. It has been assumed in previous work that the representations are composed of a language-encoding component and a language-neutral component (Libovick\u1ef3 et al., 2019) . In what follows, we explicitly try to find such a decomposition: we decompose each representation v into v = v_lang + v_lex, where v_lang encodes the language identity and v_lex is invariant to language identity. 
Specifically, we test the hypothesis using the following interventions:", "cite_spans": [ { "start": 370, "end": 394, "text": "(Libovick\u1ef3 et al., 2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Dissecting mBERT Representations", "sec_num": "4" }, { "text": "\u2022 Measuring the degree to which removing v_lang results in language-neutral word representations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dissecting mBERT Representations", "sec_num": "4" }, { "text": "\u2022 Measuring the degree to which removing v_lex results in word representations which are clustered by language identity (regardless of lexical semantics).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dissecting mBERT Representations", "sec_num": "4" }, { "text": "\u2022 Removing the v_lang component from word-in-context representations and from the output embeddings, to induce MLM prediction in other languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dissecting mBERT Representations", "sec_num": "4" }, { "text": "Splitting the representations into components is done using INLP (Ravfogel et al., 2020) , an algorithm for removing information from vector representations.", "cite_spans": [ { "start": 65, "end": 88, "text": "(Ravfogel et al., 2020)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Dissecting mBERT Representations", "sec_num": "4" }, { "text": "We formalize the decomposition objective defined earlier as finding two linear subspaces within the representation space, which contain language-independent and language-identity features. The recently proposed Iterative Null-space Projection (INLP) method (Ravfogel et al., 2020) makes it possible to remove linearly-decodable information from vector representations. Given a dataset of representations X (in our case, mBERT word-in-context representations and output embeddings) and annotations Z for the information to be removed (language identity), the method renders Z linearly unpredictable from X. It does so by iteratively training linear predictors w_1, . . . , w_n of Z, calculating the projection matrix P_N onto their joint nullspace from the individual nullspace projections P_N(w_1), . . . , P_N(w_n), and transforming X \u2190 P_N X. Recall that by the nullspace definition this guarantees w_i P_N X = 0, \u2200w_i, i.e., the features w_i uses for language prediction are neutralized. While the nullspace N(w_1, . . . , w_n) is a subspace in which Z is not linearly predictable, the complement rowspace R(w_1, . . . , w_n) is a subspace of the representation space X that corresponds to the property Z. In our case, this subspace is mBERT's language-identity subspace. In the following sections we utilize INLP in two complementary ways: (1) we use the null-space projection matrix P_N to zero out the language-identity subspace, in order to render the representations invariant to language identity 10 ; and (2) we use the rowspace projection matrix P_R = I \u2212 P_N to project mBERT representations onto the language-identity subspace, keeping only the parts that are useful for language-identity prediction. 
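A minimal sketch of how the two projection matrices can be obtained and applied; the real procedure follows Ravfogel et al. (2020), so this is only an illustration (a linear SVM stands in for the predictors w_i, and accumulating a product of projections is a simplification of the exact nullspace-intersection construction):

```python
import numpy as np
from sklearn.svm import LinearSVC

def inlp(X, y, n_iters=20):
    """X: (n_samples, dim) mBERT vectors; y: language labels.
    Returns P_N (joint nullspace projection) and P_R = I - P_N (language-identity rowspace)."""
    dim = X.shape[1]
    P_N = np.eye(dim)
    X_proj = X.copy()
    for _ in range(n_iters):
        clf = LinearSVC(max_iter=5000).fit(X_proj, y)    # one linear predictor per language
        W = clf.coef_                                    # (n_languages, dim)
        P_i = np.eye(dim) - np.linalg.pinv(W) @ W        # projection onto the nullspace of W
        P_N = P_i @ P_N                                  # accumulate the joint nullspace projection
        X_proj = X_proj @ P_i                            # P_i is symmetric, so this projects each row
    return P_N, np.eye(dim) - P_N

# P_N @ v (or v @ P_N) removes the linearly-decodable language identity from a vector v,
# while (I - P_N) @ v keeps only its language-identity component.
```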
We hypothesize that the first operation would render the representations more language-neutral, while the latter would discard the components that are shared across languages.", "cite_spans": [ { "start": 256, "end": 279, "text": "(Ravfogel et al., 2020)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "mBERT Decomposition by Nullspace Projections", "sec_num": "4.1" }, { "text": "We start by applying INLP on random representations and getting the two mentioned projection matrices: on the nullspace, and on the rowspace. We repeat this process twice: first, for representations in context, and second, for output embeddings. For each of these two cases, we sample random tokens from 5000 sentences 11 in 15 different languages, extract their respective representations (in context or simply output embeddings), and run INLP on those representations with the objective of identifying the language, for 20 iterations. We end up with 4 matrices: projection matrix on the null-space and on the rowspace for representations in context, and the same for output embeddings. TED corpus For the experiments depicted in Sections 4 and 5, we use a dataset of transcripts of TED talks in 60 languages, collected by Ye et al. (2018) 12 . For the INLP trainings, we use the 15 most frequent languages in the dataset after basic filtering, 12 of which are also included in NorthEuraLex.", "cite_spans": [ { "start": 824, "end": 840, "text": "Ye et al. (2018)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": null }, { "text": "We aim to use INLP nullspace and rowspace projection matrices as an intervention that is designed to test the hypothesis on the existence of two independent subspaces in mBERT. Concretely, we perform two experiments: (a) a cluster analysis, using t-SNE (Maaten and Hinton, 2008) and a cluster-coherence measure, of representations projected on the null-space and the row-space from different languages. We expect to see decreased and increased separation by language identity, respectively; (b) we perform a nullspace projection intervention on both the last hidden state of mBERT, and on the output embeddings, and proceed to predict a distribution over all tokens. We expect that neutralizing the language-identity information this way will encourage mBERT to perform semantically-adequate word prediction, while decreasing its ability to choose the correct language in the context of the input sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language-Neutral and Language-Encoding Representations", "sec_num": "4.2" }, { "text": "To test the hypothesis on the existence of a \"language-identity\" subspace in mBERT, we project the representations of a random subset of words from the TED dataset, from the embedding layer and the last layer, on the subspace that is spanned by all language classifiers, using the INLP rowspace-projection matrix. Figures 3 and 4 present the results for the embedding layer and the last layer, respectively (the languages shown are ar, en, es, fr, he, it, ja, ko, nl, pt-br, ro, ru, tr, zh-cn and zh-tw). In both cases, we witness a significant improvement in clustering according to language identity. At the same time, some different trends are observed across layers: the separability is better in the last layer. Romance languages, which share much of the script and some vocabulary, are well separated in the last layer, but less so in the embeddings layer. 
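The cluster analysis itself can be sketched as follows, assuming X holds the sampled mBERT vectors, langs their language labels, and P is one of the two projection matrices from the INLP sketch above:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
from sklearn.metrics import v_measure_score

def language_separation(X, langs, P):
    """K-means the projected vectors and measure agreement with language identity (V-measure)."""
    X_proj = X @ P          # P is symmetric, so right-multiplication projects each row
    clusters = KMeans(n_clusters=len(set(langs)), n_init=10).fit_predict(X_proj)
    return v_measure_score(langs, clusters)

def tsne_view(X, P):
    # 2-D t-SNE of the projected vectors, for plots in the style of Figures 3 and 4
    return TSNE(n_components=2).fit_transform(X @ P)
```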
Taiwanese and mainland Chinese (zh-tw and zh-cn, respectively) are well separable in the last layer, but not in the embedding layer. These findings suggest that the way mBERT encodes language identity differs across layers: while lower layers focus on lexical dimensions -and thus cluster the two Chinese variants, and the Romance languages, together -higher layers separate them, possibly by subtler cues, such as topical differences or syntactic alternations. This aligns with Singh et al. (2019) who demonstrated that mBERT representations become more language-specific along the layers.", "cite_spans": [ { "start": 1408, "end": 1427, "text": "Singh et al. (2019)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 306, "end": 492, "text": "Figures 3 and 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "t-SNE and Clustering", "sec_num": null }, { "text": "To quantify the influence of the projection onto the language rowspace, we calculate V-measure (Rosenberg and Hirschberg, 2007) , which assesses the degree of clustering according to language identity. Specifically, we perform K-means clustering with the number of languages as K, and then calculate V-measure to quantify alignment between the clusters and the language identity. On the embedding layer, this measure increases from 35.5% in the original space, to 61.8% on the language-identity subspace; and for the last layer, from 80.5% in the original space, to 90.35% in the language-identity subspace, both showing improved clustering by language identity.", "cite_spans": [ { "start": 93, "end": 125, "text": "(Rosenberg and Hirschberg, 2007)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "t-SNE and Clustering", "sec_num": null }, { "text": "When projecting the representations on the nullspace we get the opposite trend: less separation by language identity. The full results of this complementary projection can be found in Section C in the Appendix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "t-SNE and Clustering", "sec_num": null }, { "text": "By the disentanglement hypothesis, removing the language-encoding part of the representations should render the prediction language-agnostic. To test that, we take contextualized representations of random tokens in English sentences, and look at the original masked language model (MLM) predictions over those representations. We then compare these predictions with three variations: (a) when projecting the representations themselves on the null-space of the language-identity subspace, (b) when projecting the output embedding matrix on that null-space, (c) when projecting both the representations and the output embedding matrix on the null-space. In order to inspect the differences in predictions we get, we train a classifier 13 that, given the embedding of a word, predicts whether it is in English or not. Then, we compute the percentage of English/non-English words in the top-k predictions for each of the variants. The results are depicted in Table 4 , for k \u2208 {1, 5, 10, 20, 50}. 
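The intervention itself can be sketched as below, with P_N taken from the INLP sketch; the flags and the omission of the MLM decoder bias are simplifications, so this is an illustration rather than the exact pipeline:

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tok = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertForMaskedLM.from_pretrained("bert-base-multilingual-cased")
model.eval()

def neutral_predictions(sentence, position, P_N, k=10,
                        project_hidden=True, project_embeddings=True):
    # position: index of the token of interest in the tokenized sentence
    P = torch.tensor(P_N, dtype=torch.float)
    inputs = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        h = model.bert(**inputs).last_hidden_state[0, position]
        h = model.cls.predictions.transform(h)        # vector right before the output embeddings
        if project_hidden:
            h = P @ h                                 # null out the language-identity subspace
        E = model.get_input_embeddings().weight       # embedding matrix shared with the output layer
        if project_embeddings:
            E = E @ P.T
        scores = E @ h                                # score every vocabulary item
    return tok.convert_ids_to_tokens(scores.topk(k).indices.tolist())
```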
As expected, when projecting both the representations and the embeddings, most of the predictions are not in English (the results are averaged over 6000 instances).", "cite_spans": [], "ref_spans": [ { "start": 954, "end": 961, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Inducing Language-Neutral Token Predictions", "sec_num": null }, { "text": "The decrease in English predictions can be the result of noise that is introduced by the projection operation. To verify that the influence of the projection is focused on the language identity, and not on the lexical-semantic content of the vectors, we employ a second evaluation that focuses on the semantic coherence of the predictions. We look at the top-10 predictions in each case, and compute the cosine-similarity between the original word in the sentence, and each prediction. We expect the average cosine-similarity to drop significantly if the new predictions are mostly noise.", "cite_spans": [], "ref_spans": [ { "start": 606, "end": 613, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Inducing Language-Neutral Token Predictions", "sec_num": null }, { "text": "Table 5 : Average cosine-similarity between the original token and the top-10 MLM predictions for it, when performing INLP on the output embeddings, on the representations in context or on both.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inducing Language-Neutral Token Predictions", "sec_num": null }, { "text": "However, if the predictions are reasonably related to the original words, we expect to get a similar average. Since some of the predictions are not in English, we use MUSE cross-lingual embeddings for this evaluation (Conneau et al., 2017) . The results are shown in Table 5 . As expected, the average cosine similarity is almost the same in all cases (the average is taken across the same 6000 instances). To get a sense of the resulting predictions, we show four examples (of different POS) in Table 6 . In all cases most words that were removed from the top-10 predictions are English words, while most new words are translations of the original word into other languages.", "cite_spans": [ { "start": 201, "end": 223, "text": "(Conneau et al., 2017)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 251, "end": 258, "text": "Table 5", "ref_id": null }, { "start": 480, "end": 487, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Inducing Language-Neutral Token Predictions", "sec_num": null }, { "text": "In the previous section we established the assumption that mBERT representations are composed of a language-neutral and a language-encoding component. In this section, we present another mechanism for word-translation with mBERT, which is based on manipulating the language-encoding component of the representation, in a similar way to how analogies in word embeddings work (Mikolov et al., 2013b) . This new method has a clear mechanism behind it, and it serves as an additional validation for our assumption about the two independent components. The idea is simple: we create a single vector representation for each language, as explained below. Then, in order to change the embedding of a word in language SOURCE to language TARGET, we simply subtract from it the vector representation of language SOURCE and add the vector representation of language TARGET. Finally, in order to get the translation of the source word into the target language, we multiply the resulting representation by the output embedding matrix to get the closest words to it out of the full vocabulary. Below is a detailed explanation of the implementation. 
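The semantic-coherence check described above can be sketched as follows; it assumes MUSE-aligned embedding files have been downloaded and that the vectors of the relevant languages are merged into one dictionary (the parser follows the standard .vec text format, but paths and names are illustrative):

```python
import numpy as np

def load_vec(path, limit=200000):
    """Read an aligned .vec file: a header line, then one word and its vector per line."""
    vecs = {}
    with open(path, encoding="utf-8") as f:
        next(f)
        for i, line in enumerate(f):
            if i >= limit:
                break
            parts = line.rstrip().split(" ")
            vecs[parts[0]] = np.asarray(parts[1:], dtype=float)
    return vecs

def avg_similarity(original_word, predictions, embeddings):
    """Average cosine similarity between the original word and the top-k predictions."""
    v = embeddings.get(original_word)
    if v is None:
        return None
    sims = [v @ embeddings[p] / (np.linalg.norm(v) * np.linalg.norm(embeddings[p]))
            for p in predictions if p in embeddings]
    return float(np.mean(sims)) if sims else None
```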
Table 6 : Examples of resulting top-10 MLM predictions before and after performing INLP on both the output embeddings and representations in context. Words in red (italic) appear only in the \"before\" list, while words in blue (underlined) appear only in the \"after\" list.", "cite_spans": [ { "start": 375, "end": 398, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 1135, "end": 1142, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Analogies-based Translation", "sec_num": "5" }, { "text": "We start by extracting sentences in each language. From each sentence, we choose a random token and extract its representation from the output embedding matrix. Then, for each language we average all the obtained representations, to create a single vector representing that language. For that we use the same representations extracted for training INLP, as described in Section 4.1. Note that no hyper-parameter tuning was done when calculating these language vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating language-representation vectors", "sec_num": "5.1" }, { "text": "The assumption here is that when averaging this way, the lexical differences between the representations cancel out, while the shared language component in all of them persists.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating language-representation vectors", "sec_num": "5.1" }, { "text": "We are interested in translating words from a SOURCE language to a TARGET language. For that we simply take the word embedding of the SOURCE word, subtract the representation of the SOURCE language from it, and add the representation of the TARGET language. We multiply this new representation by the output embedding matrix to get a ranking over all the vocabulary, from the closest word to it to the least close.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performing Translation with Analogies", "sec_num": "5.2" }, { "text": "In Table 1 we report the results of translation using analogies (second row). The success of this method supports the reasoning behind it -indeed changing the language component of the representation enables us to get satisfactory results in wordlevel translation. While the template-based method, which is non-linear, puts a competitive lower bound on the amount of parallel information embedded in mBERT, this strictly linear method is able to recover a large portion of it.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5.3" }, { "text": "In contrast to the template-based method, t-SNE visualization of the analogies-based translation vectors reveals low clustering by language (see Figure 8 in Section D in the Appendix).", "cite_spans": [], "ref_spans": [ { "start": 145, "end": 153, "text": "Figure 8", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Visualization of the Representation Space", "sec_num": null }, { "text": "The analogies-based translation method can be easily applied to all language pairs, by subtracting the representation vector of the source language and adding that of the target language. Figure 5 presents a heatmap of the acc@10 for every language pair, with source languages on the left and target languages at the bottom. 
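Putting Sections 5.1 and 5.2 together, the whole procedure can be sketched as follows (same model assumptions as the earlier sketches; sentence sampling and filtering are simplified):

```python
import random
import torch
from transformers import BertTokenizer, BertForMaskedLM

tok = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertForMaskedLM.from_pretrained("bert-base-multilingual-cased")
E = model.get_input_embeddings().weight.detach()      # shared input/output embedding matrix

def language_vectors(sentences_by_lang):
    """Average the embedding rows of one random token per sentence, per language."""
    vecs = {}
    for lang, sentences in sentences_by_lang.items():
        rows = []
        for s in sentences:
            ids = tok(s, add_special_tokens=False)["input_ids"]
            if ids:
                rows.append(E[random.choice(ids)])
        vecs[lang] = torch.stack(rows).mean(dim=0)
    return vecs

def analogy_translate(word, src_lang, tgt_lang, lang_vecs, k=10):
    v = E[tok.convert_tokens_to_ids(word)] - lang_vecs[src_lang] + lang_vecs[tgt_lang]
    scores = E @ v                                    # rank the whole vocabulary by similarity
    top = tok.convert_ids_to_tokens(scores.topk(k + 1).indices.tolist())
    return [t for t in top if t != word][:k]          # drop the source word itself
```

Looping analogy_translate over all source-target pairs yields an accuracy matrix of the kind shown in Figure 5.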
We note the high translation scores between related languages, for example, Arabic and Hebrew (both ways), and French, Spanish and Italian (all pairs).", "cite_spans": [], "ref_spans": [ { "start": 188, "end": 196, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Translation between every Language Pair", "sec_num": null }, { "text": "The template-based method we presented is non-linear and puts a high lower bound on the amount of parallel information found in mBERT, with surprisingly good results on word-level translation. The analogies-based method also gets impressive results, but to a lesser extent than the template-based one. In addition, the resulting representations in the analogies-based method are much less structured. These together suggest that most of the parallel information is not linearly decodable from mBERT. Figure 5 : Acc@10 of word-level translation using the analogies-based method, with source languages on the rows, and target languages on the columns.", "cite_spans": [], "ref_spans": [ { "start": 589, "end": 597, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "The reasoning behind the analogies-based method is very clear: under linearity assumptions, we explicitly characterize and compute the decomposition into language-encoding and language-neutral components, and derive a word-level translation method based on this decomposition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "The mechanism behind the template-based method and the source of its success, however, are much harder to understand and interpret. While it is possible that some parallel data, in one form or another, is present in the training corpora, this is still an implicit signal: there is no explicit supervision for the learning of translation. The fact that MLM training is sufficient -at least to some degree -to induce learning of the algorithmic function of translation (without further supervised finetuning) is nontrivial. We believe that the success of this method is far from obvious. We leave further investigation of the sources of this success to future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "We aim to shed light on a basic question regarding multilingual BERT: How much word-level translation information does it embed and what are the ways to extract it? Answering this question can help understand the empirical findings on its impressive transfer ability across languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "We show that the knowledge needed for word-level translation is implicitly encoded in the model, and is easy to extract with simple methods, without fine-tuning. This information is likely stored in a non-linear way. However, some parts of these representations can be recovered linearly: we identify an empirical language-identity subspace in mBERT, and show that under linearity assumptions, the representations in different languages are easily separable in that subspace; neutralizing the language-identity subspace encourages the model to perform word predictions which are less sensitive to language identity, but are nonetheless semantically meaningful. 
We argue that the results of those interventions support the hypothesis on the existence of identifiable language components in mBERT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "As expected, the nullspace does not encode language identity: V-measure drops to 11.5% and 11.4% in the embeddings layer and in the last layer, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "We plot the t-SNE projection of the representations of the analogies-based method (after subtraction and addition of the language vectors), colored by target language. While the representations of the template-based method are clearly clustered according to the target language, the representations in this method are completely mixed, see Figure 8 . ", "cite_spans": [], "ref_spans": [ { "start": 340, "end": 348, "text": "Figure 8", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "D Visualization of the Representation Space", "sec_num": null }, { "text": "where SOURCE is the word we wish to translate and [MASK] is the special token that BERT uses as an indication for word prediction, see Section A in the Appendix for the other templates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://northeuralex.org/ 3 We use this data also for experimenting with source languages other than English, in Section 5. 4 Number of translated pairs we are left with in each language: Russian: 224, French: 429, Italian: 352, Dutch: 347, Spanish: 452, Hebrew: 158, Turkish: 199, Romanian: 243, Korean: 42, Arabic: 191, Japanese: 214. 5 Number of words from each POS: 'N': 480, 'V': 340, 'A': 102, 'ADV': 47, 'NUM': 22, 'PRN': 9, 'PRP': 7, 'FADV': 4, 'FPRN': 2, 'CNJ': 2, 'FNUM': 1. 6 This removal mainly affects acc@1, since in many cases, the first retrieved word is the source word. This is a common practice, especially for the analogies-based method (Mikolov et al., 2013a).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We have 1765, 363, 323 instances, respectively. Other POS have less than 200 instances, and are thus omitted from the analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "to the extent that language identity is indeed encoded in a linear subspace, and that INLP finds this subspace. 11 For the output embeddings, we exclude tokens that start with \"##\"; for the last layer representations, sampled tokens may include \"CLS\" or \"SEP\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/neulab/word-embeddings-for-nmt", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "SCIKIT-LEARN implementation (Pedregosa et al., 2011) with default parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 
802774 (iEXTRACT), and from the Israeli ministry of Science, Technology and Space through the Israeli-French Maimonide Cooperation programme.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "The different templates are listed in Table 7 , from the best performing to least performing. We report the results throughout the paper using the best template (first). Templates 5-7 fail completely, while templates 2-4 result in reasonable accuracy. 1 \"The word 'SOURCE' in LANGUAGE is: [MASK] .\" 2 \"'SOURCE' in LANGUAGE is: [MASK] .\" 3 \"Translate the word 'SOURCE' into LANGUAGE: [MASK] .\" 4 \"What is the meaning of the LANGUAGE word [MASK]? 'SOURCE'.\" 5 \"What is the translation of the word 'SOURCE' into LANGUAGE? [MASK] .\" 6 \"The translation of the word 'SOURCE' into LANGUAGE is [MASK] .\" 7 \"How do you say 'SOURCE' in LANGUAGE? [MASK] .\" Table 8 depicts the results of language prediction from the template. We report acc@1,5,10 for all languages. In Figures 6 and 7 we present a t-SNE projection of the representations in the embeddings layer and the last layer, projected onto the INLP nullspace -a subspace which discards information relevant for language-identity prediction. Table 8 : Prediction accuracy of the language, when the language is masked in the template.", "cite_spans": [ { "start": 288, "end": 294, "text": "[MASK]", "ref_id": null }, { "start": 326, "end": 332, "text": "[MASK]", "ref_id": null }, { "start": 382, "end": 388, "text": "[MASK]", "ref_id": null }, { "start": 518, "end": 524, "text": "[MASK]", "ref_id": null }, { "start": 585, "end": 591, "text": "[MASK]", "ref_id": null }, { "start": 635, "end": 641, "text": "[MASK]", "ref_id": null } ], "ref_spans": [ { "start": 38, "end": 45, "text": "Table 7", "ref_id": null }, { "start": 645, "end": 652, "text": "Table 8", "ref_id": null }, { "start": 976, "end": 983, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "A Templates", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Multilingual alignment of contextual word representations", "authors": [ { "first": "Steven", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Kitaev", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.03518" ] }, "num": null, "urls": [], "raw_text": "Steven Cao, Nikita Kitaev, and Dan Klein. 2020. Mul- tilingual alignment of contextual word representa- tions. arXiv:2002.03518.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Finding universal grammatical relations in multilingual BERT", "authors": [ { "first": "Ethan", "middle": [ "A" ], "last": "Chi", "suffix": "" }, { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2020, "venue": "CoRR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ethan A. Chi, John Hewitt, and Christopher D. Man- ning. 2020. Finding universal grammatical relations in multilingual BERT. 
CoRR, abs/2005.04511.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Word translation without parallel data", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" }, { "first": "Ludovic", "middle": [], "last": "Denoyer", "suffix": "" }, { "first": "Herv\u00e9", "middle": [], "last": "J\u00e9gou", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1710.04087" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2017. Word translation without parallel data. arXiv:1710.04087.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Northeuralex: a widecoverage lexical database of northern eurasia. Language Resources and Evaluation", "authors": [ { "first": "Johannes", "middle": [], "last": "Dellert", "suffix": "" }, { "first": "Thora", "middle": [], "last": "Daneyko", "suffix": "" }, { "first": "Alla", "middle": [], "last": "M\u00fcnch", "suffix": "" }, { "first": "Alina", "middle": [], "last": "Ladygina", "suffix": "" }, { "first": "Armin", "middle": [], "last": "Buch", "suffix": "" }, { "first": "Natalie", "middle": [], "last": "Clarius", "suffix": "" }, { "first": "Ilja", "middle": [], "last": "Grigorjew", "suffix": "" }, { "first": "Mohamed", "middle": [], "last": "Balabel", "suffix": "" }, { "first": "Hizniye", "middle": [ "Isabella" ], "last": "Boga", "suffix": "" }, { "first": "Zalina", "middle": [], "last": "Baysarova", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "1--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johannes Dellert, Thora Daneyko, Alla M\u00fcnch, Alina Ladygina, Armin Buch, Natalie Clarius, Ilja Grigor- jew, Mohamed Balabel, Hizniye Isabella Boga, Za- lina Baysarova, et al. 2019. Northeuralex: a wide- coverage lexical database of northern eurasia. Lan- guage Resources and Evaluation, pages 1-29.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. 
Associ- ation for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "How can we know what language models know?", "authors": [ { "first": "Zhengbao", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "F", "middle": [], "last": "Frank", "suffix": "" }, { "first": "", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.12543" ] }, "num": null, "urls": [], "raw_text": "Zhengbao Jiang, Frank F Xu, Jun Araki, and Graham Neubig. 2019. How can we know what language models know? arXiv preprint arXiv:1911.12543.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Cross-lingual ability of multilingual bert: An empirical study", "authors": [ { "first": "K", "middle": [], "last": "Karthikeyan", "suffix": "" }, { "first": "Zihan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Mayhew", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K Karthikeyan, Zihan Wang, Stephen Mayhew, and Dan Roth. 2020. Cross-lingual ability of multilin- gual bert: An empirical study. In International Con- ference on Learning Representations.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "How language-neutral is multilingual bert", "authors": [ { "first": "Jind\u0159ich", "middle": [], "last": "Libovick\u1ef3", "suffix": "" }, { "first": "Rudolf", "middle": [], "last": "Rosa", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Fraser", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.03310" ] }, "num": null, "urls": [], "raw_text": "Jind\u0159ich Libovick\u1ef3, Rudolf Rosa, and Alexander Fraser. 2019. How language-neutral is multilingual bert? arXiv:1911.03310.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Visualizing data using t-sne", "authors": [ { "first": "Laurens", "middle": [], "last": "Van Der Maaten", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2008, "venue": "Journal of machine learning research", "volume": "9", "issue": "", "pages": "2579--2605", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579-2605.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "1st International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word represen- tations in vector space. 
In 1st International Con- ference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Linguistic regularities in continuous space word representations", "authors": [ { "first": "Tom\u00e1\u0161", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Yih", "middle": [], "last": "Wen-Tau", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Zweig", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 conference of the north american chapter of the association for computational linguistics: Human language technologies", "volume": "", "issue": "", "pages": "746--751", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom\u00e1\u0161 Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 conference of the north american chapter of the as- sociation for computational linguistics: Human lan- guage technologies, pages 746-751.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Can multilingual language models transfer to an unseen dialect? a case study on north african arabizi", "authors": [ { "first": "Benjamin", "middle": [], "last": "Muller", "suffix": "" }, { "first": "Beno\u02c6\u0131t", "middle": [], "last": "Sagot", "suffix": "" }, { "first": "Djame", "middle": [], "last": "Seddah", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.00318" ] }, "num": null, "urls": [], "raw_text": "Benjamin Muller, Beno\u02c6\u0131t Sagot, and Djame Seddah. 2020. Can multilingual language models transfer to an unseen dialect? a case study on north african ara- bizi. arXiv:2005.00318.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Scikit-learn: Machine learning in Python", "authors": [ { "first": "F", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "G", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "A", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "V", "middle": [], "last": "Michel", "suffix": "" }, { "first": "B", "middle": [], "last": "Thirion", "suffix": "" }, { "first": "O", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "M", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "P", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "R", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "V", "middle": [], "last": "Dubourg", "suffix": "" }, { "first": "J", "middle": [], "last": "Vanderplas", "suffix": "" }, { "first": "A", "middle": [], "last": "Passos", "suffix": "" }, { "first": "D", "middle": [], "last": "Cournapeau", "suffix": "" }, { "first": "M", "middle": [], "last": "Brucher", "suffix": "" }, { "first": "M", "middle": [], "last": "Perrot", "suffix": "" }, { "first": "E", "middle": [], "last": "Duchesnay", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duch- esnay. 2011. Scikit-learn: Machine learning in Python. 
Journal of Machine Learning Research, 12:2825-2830.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Language models as knowledge bases?", "authors": [ { "first": "Fabio", "middle": [], "last": "Petroni", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Anton", "middle": [], "last": "Bakhtin", "suffix": "" }, { "first": "Yuxiang", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Miller", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2463--2473", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabio Petroni, Tim Rockt\u00e4schel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowl- edge bases? In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 2463-2473.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "How multilingual is multilingual BERT?", "authors": [ { "first": "Telmo", "middle": [], "last": "Pires", "suffix": "" }, { "first": "Eva", "middle": [], "last": "Schlinger", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Garrette", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4996--5001", "other_ids": { "DOI": [ "10.18653/v1/P19-1493" ] }, "num": null, "urls": [], "raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4996- 5001, Florence, Italy. Association for Computa- tional Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Null it out: Guarding protected attributes by iterative nullspace projection", "authors": [ { "first": "Shauli", "middle": [], "last": "Ravfogel", "suffix": "" }, { "first": "Yanai", "middle": [], "last": "Elazar", "suffix": "" }, { "first": "Hila", "middle": [], "last": "Gonen", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Twiton", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null it out: Guarding protected attributes by iterative nullspace projection. 
CoRR, abs/2004.07667.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Vmeasure: A conditional entropy-based external cluster evaluation measure", "authors": [ { "first": "Andrew", "middle": [], "last": "Rosenberg", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Hirschberg", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "410--420", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Rosenberg and Julia Hirschberg. 2007. V- measure: A conditional entropy-based external clus- ter evaluation measure. In EMNLP-CoNLL 2007, Proceedings of the 2007 Joint Conference on Empir- ical Methods in Natural Language Processing and Computational Natural Language Learning, June 28-30, 2007, Prague, Czech Republic, pages 410- 420. ACL.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "BERT is not an interlingua and the bias of tokenization", "authors": [ { "first": "Jasdeep", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Mccann", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP", "volume": "", "issue": "", "pages": "47--55", "other_ids": { "DOI": [ "10.18653/v1/D19-6106" ] }, "num": null, "urls": [], "raw_text": "Jasdeep Singh, Bryan McCann, Richard Socher, and Caiming Xiong. 2019. BERT is not an interlingua and the bias of tokenization. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 47-55, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "olmpics -on what language model pre-training captures", "authors": [ { "first": "Alon", "middle": [], "last": "Talmor", "suffix": "" }, { "first": "Yanai", "middle": [], "last": "Elazar", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2019. olmpics -on what language model pre-training captures.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Cross-lingual BERT transformation for zero-shot dependency parsing", "authors": [ { "first": "Yuxuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Jiang", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Yijia", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "5721--5727", "other_ids": { "DOI": [ "10.18653/v1/D19-1575" ] }, "num": null, "urls": [], "raw_text": "Yuxuan Wang, Wanxiang Che, Jiang Guo, Yijia Liu, and Ting Liu. 2019. Cross-lingual BERT trans- formation for zero-shot dependency parsing. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 5721- 5727, Hong Kong, China. Association for Computa- tional Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Huggingface's transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Patrick Von Platen", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Xu", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Scao", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "Drame", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Lhoest", "suffix": "" }, { "first": "", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT", "authors": [ { "first": "Shijie", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "833--844", "other_ids": { "DOI": [ "10.18653/v1/D19-1077" ] }, "num": null, "urls": [], "raw_text": "Shijie Wu and Mark Dredze. 2019. Beto, bentz, be- cas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 833-844, Hong Kong, China. 
Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "When and why are pre-trained word embeddings useful for neural machine translation", "authors": [ { "first": "Qi", "middle": [], "last": "Ye", "suffix": "" }, { "first": "Sachan", "middle": [], "last": "Devendra", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Matthieu", "suffix": "" }, { "first": "Padmanabhan", "middle": [], "last": "Sarguna", "suffix": "" }, { "first": "Neubig", "middle": [], "last": "Graham", "suffix": "" } ], "year": 2018, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qi Ye, Sachan Devendra, Felix Matthieu, Padmanabhan Sarguna, and Neubig Graham. 2018. When and why are pre-trained word embeddings useful for neural machine translation. In HLT-NAACL.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "text": "Figure 1: t-SNE projections of the representations of the template-based method. The projections (Maaten and Hinton, 2008) are colored by language, and the representations cluster clearly according to language.", "num": null }, "FIGREF1": { "type_str": "figure", "uris": null, "text": "Confusion matrix of language prediction when the language is masked in the template. The 20 most accurate languages are included; English is omitted.", "num": null }, "FIGREF2": { "type_str": "figure", "uris": null, "text": "t-SNE projection of the output embeddings of random words from different languages, originally (left) and after projection onto the language-identity subspace (right).", "num": null }, "FIGREF3": { "type_str": "figure", "uris": null, "text": "t-SNE projection of last-hidden-layer representations of random words from different languages, originally (left) and after projection onto the language-identity subspace (right).", "num": null }, "FIGREF5": { "type_str": "figure", "uris": null, "text": "t-SNE projections of the representations of the analogies-based method.", "num": null }, "TABREF3": { "num": null, "type_str": "table", "content": "
[Table values were not recovered by the extraction; only the axis labels survive. Both the rows and the columns list the same 20 languages: greek, russian, arabic, hebrew, german, dutch, swedish, danish, japanese, korean, french, latin, italian, spanish, portuguese, polish, finnish, turkish, welsh, hungarian.]
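The figure entries above repeatedly describe t-SNE projections of mBERT word representations colored by language. The following is a minimal sketch, not the authors' code, of how such a plot can be produced with Huggingface transformers (Wolf et al., 2019) and scikit-learn (Pedregosa et al., 2011); the word lists, the mean-pooling over sub-tokens, and the t-SNE settings are illustrative assumptions.

```python
# Sketch: embed a few words per language with mBERT's last hidden layer,
# project to 2D with t-SNE, and color the points by language.
import numpy as np
import torch
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")
model.eval()

# Toy word lists per language (assumed for illustration only).
words = {
    "english": ["dog", "house", "water", "book"],
    "german": ["Hund", "Haus", "Wasser", "Buch"],
    "spanish": ["perro", "casa", "agua", "libro"],
    "french": ["chien", "maison", "eau", "livre"],
}

vecs, langs = [], []
with torch.no_grad():
    for lang, word_list in words.items():
        for w in word_list:
            enc = tokenizer(w, return_tensors="pt")
            out = model(**enc)
            # Mean-pool the last hidden layer over the word's sub-tokens,
            # skipping the [CLS] and [SEP] positions.
            vecs.append(out.last_hidden_state[0, 1:-1].mean(dim=0).numpy())
            langs.append(lang)

# Project to 2D; perplexity must be smaller than the number of points.
proj = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(np.stack(vecs))

for lang in words:
    idx = [i for i, l in enumerate(langs) if l == lang]
    plt.scatter(proj[idx, 0], proj[idx, 1], label=lang)
plt.legend()
plt.savefig("tsne_by_language.png")
```

With more words and languages, the resulting scatter plot would be the kind of visualization the captions describe: one color per language, with clusters indicating how strongly the representations encode language identity.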