{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:14:08.275733Z" }, "title": "Identifying the Importance of Content Overlap for Better Cross-lingual Embedding Mappings", "authors": [ { "first": "R\u00e9ka", "middle": [], "last": "Cserh\u00e1ti", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Szeged", "location": {} }, "email": "cserhatir@inf.u-szeged.hu" }, { "first": "G\u00e1bor", "middle": [], "last": "Berend", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Szeged", "location": {} }, "email": "berendg@inf.u-szeged.hu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this work, we analyze the performance and properties of cross-lingual word embedding models created by mapping-based alignment methods. We use several measures of corpus and embedding similarity to predict BLI scores of cross-lingual embedding mappings over three types of corpora, three embedding methods and 55 language pairs. Our experimental results corroborate that instead of mere size, the amount of common content in the training corpora is essential. This phenomenon manifests in that i) despite of the smaller corpus sizes, using only the comparable parts of Wikipedia for training the monolingual embedding spaces to be mapped is often more efficient than relying on all the contents of Wikipedia, ii) the smaller, in return less diversified Spanish Wikipedia works almost always much better as a training corpus for bilingual mappings than the ubiquitously used English Wikipedia.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "In this work, we analyze the performance and properties of cross-lingual word embedding models created by mapping-based alignment methods. We use several measures of corpus and embedding similarity to predict BLI scores of cross-lingual embedding mappings over three types of corpora, three embedding methods and 55 language pairs. Our experimental results corroborate that instead of mere size, the amount of common content in the training corpora is essential. This phenomenon manifests in that i) despite of the smaller corpus sizes, using only the comparable parts of Wikipedia for training the monolingual embedding spaces to be mapped is often more efficient than relying on all the contents of Wikipedia, ii) the smaller, in return less diversified Spanish Wikipedia works almost always much better as a training corpus for bilingual mappings than the ubiquitously used English Wikipedia.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Word embedding methods (e.g. Mikolov et al., 2013b , Pennington et al., 2014 , Bojanowski et al., 2017 have become an essential tool for representing words in most NLP tasks. These algorithms assign a low-dimensional vector to words based on the patterns of their contexts in a training corpus, and this way they locate the words in the vector space in a consistent way, so that words with similar meaning are assigned to similar vectors. Therefore, it can be assumed that the layout of the word vectors are near equivalent in independently trained models, so word embedding models in different languages can be aligned into a common space. 
Such alignments are a standard way of creating bi-or multilingual word embedding spaces, which are very useful for machine translation and a wide range of cross-lingual NLP tasks.", "cite_spans": [ { "start": 29, "end": 50, "text": "Mikolov et al., 2013b", "ref_id": "BIBREF17" }, { "start": 51, "end": 76, "text": ", Pennington et al., 2014", "ref_id": "BIBREF22" }, { "start": 77, "end": 102, "text": ", Bojanowski et al., 2017", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although large pre-trained language models are superior to traditional word embeddings in many NLP tasks, one strength of these mapping-based methods is their extensive applicability, e.g. for low-resource languages or special domain (e.g. medical) data. Additionally, probably due to significantly lower resource requirements (Strubell et al., 2019) and often competitive results (Litschko et al., 2021) , a large proportion of industrial NLP applications is still based on static word embeddings (Arora et al., 2020) .", "cite_spans": [ { "start": 327, "end": 350, "text": "(Strubell et al., 2019)", "ref_id": "BIBREF27" }, { "start": 381, "end": 404, "text": "(Litschko et al., 2021)", "ref_id": "BIBREF14" }, { "start": 498, "end": 518, "text": "(Arora et al., 2020)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "On the other hand, the results of the mappings and the performance of the multilingual models was still shown to be extremely dependent on the mapping scenario (S\u00f8gaard et al., 2018; . Previous work attributes low performance and non-isomorphism of monolingual embedding models to typological differences between languages, domain differences in the training corpora, insufficient resources, and under-training (Doval et al., 2020; S\u00f8gaard et al., 2018; .", "cite_spans": [ { "start": 160, "end": 182, "text": "(S\u00f8gaard et al., 2018;", "ref_id": "BIBREF26" }, { "start": 411, "end": 431, "text": "(Doval et al., 2020;", "ref_id": "BIBREF7" }, { "start": 432, "end": 453, "text": "S\u00f8gaard et al., 2018;", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We rely on popular bilingual alignment methods, and conduct a thorough analysis of the connection between the evaluation scores of the mappings and language relatedness, isomorphism of the source embeddings measured by several metrics, and some newly identified, easy to calculate corpus properties that are highly predictive of the bilingual mapping performance: the token overlap ratio in the vocabularies, and the distance between word distributions of the corpora. These combined with corpus size surpass existing isomorphism measures as predictors of bilingual mapping score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our goal is to help researchers and developers use resources more efficiently, and find the most appropriate setting for creating bilingual word embedding models. 1 2 Related Work", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The success of the pioneering neural word embedding models (Mikolov et al., 2013b) almost immediately led to the idea of creating bilingual models using linear transformations (Mikolov et al., 2013a) . 
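To make the mapping idea concrete, the sketch below fits a linear map from a seed dictionary and translates by nearest-neighbour search. It is a minimal stand-in: Mikolov et al. (2013a) optimise the same objective with stochastic gradient descent, while the closed-form least-squares solution used here corresponds to the formulation discussed below for Artetxe et al. (2016); all function and variable names are illustrative, not from any released implementation.

```python
import numpy as np

def fit_linear_mapping(src_emb, tgt_emb, seed_pairs):
    """Fit W minimising ||XW - Z||^2 over a seed dictionary.
    src_emb / tgt_emb: dict word -> vector; seed_pairs: [(src, tgt), ...]."""
    X = np.stack([src_emb[s] for s, _ in seed_pairs])
    Z = np.stack([tgt_emb[t] for _, t in seed_pairs])
    W, *_ = np.linalg.lstsq(X, Z, rcond=None)  # closed-form least squares
    return W

def translate(word, W, src_emb, tgt_emb):
    """Map a source vector with W and return the nearest target word by cosine."""
    q = src_emb[word] @ W
    words = list(tgt_emb)
    T = np.stack([tgt_emb[w] for w in words])
    sims = T @ q / (np.linalg.norm(T, axis=1) * np.linalg.norm(q) + 1e-9)
    return words[int(np.argmax(sims))]
```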
The original problem is finding a mapping that transforms an embedding matrix close to another in a different language. Mikolov et al. (2013a) solve this with stochastic gradient descent, minimizing the squared euclidean distance of word pairs from a seed dictionary.", "cite_spans": [ { "start": 59, "end": 82, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF17" }, { "start": 176, "end": 199, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF16" }, { "start": 322, "end": 344, "text": "Mikolov et al. (2013a)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Mapping Algorithms", "sec_num": "2.1" }, { "text": "Subsequently, several works improved this mapping method: for example, Xing et al. (2015) normalize the source and target embeddings, and constrain the mapping to be orthogonal; Artetxe et al. (2016) center the mean, and find the transformation in closed form, solving a least squares problem. Later Artetxe et al. (2018a) proposed a multi-step framework consisting of mean-centering, normalization, whitening, and an orthogonal transformation. In contrast, RCSLS (Joulin et al., 2018) is based on relaxing the orthogonal restriction and returning to stochastic gradient descent with a different loss function, aiming to be consistent with the CSLS (Cross-domain Similarity Local Scaling; Conneau et al., 2018) retrieval method.", "cite_spans": [ { "start": 71, "end": 89, "text": "Xing et al. (2015)", "ref_id": "BIBREF30" }, { "start": 178, "end": 199, "text": "Artetxe et al. (2016)", "ref_id": "BIBREF1" }, { "start": 300, "end": 322, "text": "Artetxe et al. (2018a)", "ref_id": "BIBREF2" }, { "start": 464, "end": 485, "text": "(Joulin et al., 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Mapping Algorithms", "sec_num": "2.1" }, { "text": "Additionally, aligning word embeddings in an unsupervised way, without any cross-lingual signal has also become an exciting topic, giving rise to diverse approaches of unsupervised embedding mappings. The first really successful solution by Conneau et al. (2018) is based on adversarial learning; Artetxe et al. (2018b) proposed an iterative self-learning method initialized by sorting embedding values; and Non-adversarial Translation by Hoshen and Wolf (2018) also uses self-learning, but a different method, initialized using PCA.", "cite_spans": [ { "start": 241, "end": 262, "text": "Conneau et al. (2018)", "ref_id": "BIBREF6" }, { "start": 297, "end": 319, "text": "Artetxe et al. (2018b)", "ref_id": "BIBREF3" }, { "start": 439, "end": 461, "text": "Hoshen and Wolf (2018)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Mapping Algorithms", "sec_num": "2.1" }, { "text": "Several works have already analyzed performance of cross-lingual embedding mappings (e.g. Kementchedjhieva et al., 2019; . More related to this paper, reasons why some settings do not work well were also investigated. For instance, S\u00f8gaard et al. (2018) use eigenvector similarity of nearest neighbor graphs to show that the isomorphic assumption does not hold in many cases, and report the negative effect of language and domain dissimilarity on the unsu-pervised embedding alignment method of Conneau et al. (2018) . In addition, states that small corpora and under-training also play a significant role in non-isomorphism of word embeddings. Dubossarsky et al. 
(2020) also examine isomorphism, and suggest some new measures to quantify transferability of embedding spaces based on their spectral statistics: how similar their singular values are on the one hand, and their individual robustness measured by condition numbers, on the other hand.", "cite_spans": [ { "start": 90, "end": 120, "text": "Kementchedjhieva et al., 2019;", "ref_id": "BIBREF12" }, { "start": 232, "end": 253, "text": "S\u00f8gaard et al. (2018)", "ref_id": "BIBREF26" }, { "start": 495, "end": 516, "text": "Conneau et al. (2018)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Analysis of Cross-Lingual Word Embeddings", "sec_num": "2.2" }, { "text": "In the rest of this paper, we supplement isomorphism measures with corpus similarity measures, and show that corpus similarity is one of the key factors influencing mapping scores in both supervised and unsupervised cases. We show that two corpora of sufficient size, coming from the same domain (Wikipedia in this case) can still be too different for good mapping scores, while good mappings are possible on relatively small corpora if other important conditions are met.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis of Cross-Lingual Word Embeddings", "sec_num": "2.2" }, { "text": "In our experiments, we compare BLI (Bilingual Lexicon Induction) scores of embeddings trained on three types of corpora, all of them extracted from Wikipedia: 1. We use the full Wikipedia texts 2 of our 11 languages studied: Czech, Danish, German, Greek, English, Spanish, Finnish, Hungarian, Norwegian, Romanian and Turkish. These all come from the same domain (encyclopedia articles), which condition was reported to be necessary for sufficient unsupervised mappings by S\u00f8gaard et al. (2018) . Nevertheless, this type is the least restricted in our experiments, and the sizes may be very dissimilar for different languages, but these are the largest among our experiments. We call this type of corpora loosely-comparable Wikipedia, or L-Wiki for short.", "cite_spans": [ { "start": 472, "end": 493, "text": "S\u00f8gaard et al. (2018)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "3.1" }, { "text": "2. Even within the same domain, the content of the corpora may be very different, which might (and, according to our hypothesis, does) have a negative influence on the mappings. Therefore, we create a mildly-comparable (M-Wiki) corpus, separately for all of our 55 lan-guage pairs, by filtering articles with bidirectional cross-language links between the two Wikipedias. This is also expected to make sizes comparable within a language pair, but the different length of the articles may still cause dissimilarity in size. Additionally, the amount of filtered parts between different language pairs is especially variable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "3.1" }, { "text": "3. In terms of both size and content, a parallel corpus between two languages is as similar as possible. As such, we use the Wiki-Matrix (Schwenk et al., 2021) parallel corpus (hereafter strictly-comparable Wiki or S-Wiki), which is also extracted from Wikipedia. 
The sizes here are substantially smaller than in the previous types, but also vary by language pairs.", "cite_spans": [ { "start": 137, "end": 159, "text": "(Schwenk et al., 2021)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "3.1" }, { "text": "This way, we have various levels of corpus size, language relatedness, and proportion of overlapping information among our experimental language pairs. To the best of our knowledge, our experiments are the first to analyze different corpus types, and to dissect the effects of corpus similarity on the quality of bilingual embedding mapping.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "3.1" }, { "text": "Since there is no available gold standard dictionary for most of our language pairs, we create silver dictionaries from the WikiMatrix parallel corpora using the word2word (Choe et al., 2020) tool, which generates translations for words based on parallel sentences. To ensure the quality of these, we generate two translations for each word above the mean frequency in the corpus, and only keep pairs that are mutual translations of each other. Then we randomly select (disjoint) training and test dictionaries with 3000 and 1000 source words, respectively.", "cite_spans": [ { "start": 172, "end": 191, "text": "(Choe et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Training and Test Dictionaries", "sec_num": "3.2" }, { "text": "We train FastText (Bojanowski et al., 2017) word vectors on all of the above corpora using Gensim (\u0158eh\u016f\u0159ek and Sojka, 2010), with the following hyperparameters: dimensions: 300, negative samples: 5, context window: 5, minimum word count: 5, maximum vocabulary size: 200 000.", "cite_spans": [ { "start": 18, "end": 43, "text": "(Bojanowski et al., 2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Word Embedding Models", "sec_num": "3.3" }, { "text": "To create the cross-lingual models we use three mapping methods: supervised VecMap (Artetxe et al., 2018a) , RCSLS (Joulin et al., 2018) , and Non-adversarial Translation (NAT; Hoshen and Wolf, 2018) on the embeddings trained on three types of corpora and 110 language pairs, performing a total of 990 mappings as we separate different source-target directions of the same language pair.", "cite_spans": [ { "start": 83, "end": 106, "text": "(Artetxe et al., 2018a)", "ref_id": "BIBREF2" }, { "start": 115, "end": 136, "text": "(Joulin et al., 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Word Embedding Models", "sec_num": "3.3" }, { "text": "We evaluate the models with P@1 scores, i.e. by finding the nearest neighbor of a source word among the target language embeddings, and see whether it is a correct translation according to our dictionary. We experimented with other, more sophisticated evaluation methods as well, but the scores did not change relative to each other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Embedding Models", "sec_num": "3.3" }, { "text": "We show the distributions of the used mapping algorithms on separate corpus types in Figure 1 . Our first important observation is that the results are much more dependent on the corpus type than on the mapping algorithm. While mappings of mildly and loosely comparable corpora reach similar median scores and extremes, the strictly-comparable (hence a lot smaller) corpus mapping scores range much wider. 
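For reference, the P@1 metric reported throughout this section can be sketched as follows. This is a minimal illustration assuming the source embeddings have already been mapped into the target space; the data-structure names (mapped_src, tgt_emb, test_dict) are ours, not the paper's actual code.

```python
import numpy as np

def p_at_1(mapped_src, tgt_emb, test_dict):
    """Precision@1: fraction of test source words whose nearest target
    neighbour (by cosine similarity) is an acceptable translation.

    mapped_src: dict source word -> mapped vector (in target space)
    tgt_emb:    dict target word -> vector
    test_dict:  dict source word -> set of gold translations"""
    tgt_words = list(tgt_emb)
    T = np.stack([tgt_emb[w] for w in tgt_words])
    T = T / np.linalg.norm(T, axis=1, keepdims=True)   # unit-normalise once
    hits = 0
    for src_word, gold in test_dict.items():
        q = mapped_src[src_word]
        q = q / (np.linalg.norm(q) + 1e-9)
        nearest = tgt_words[int(np.argmax(T @ q))]
        hits += nearest in gold
    return hits / len(test_dict)
```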
The median of the S-Wiki scores is very low, but the highest scores and quantiles are in line with the other corpus types. Later we will also investigate in which cases do these mappings perform well, and why.", "cite_spans": [], "ref_spans": [ { "start": 85, "end": 93, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Mapping Methods", "sec_num": "4.1" }, { "text": "Another information visible in Figure 1 is that in our settings, VecMap performs the best among these three algorithms. However, except some cases where it completely fails, Non-adversarial Translation reaches competitive results to the other methods, despite the lack of supervision. ", "cite_spans": [], "ref_spans": [ { "start": 31, "end": 39, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Mapping Methods", "sec_num": "4.1" }, { "text": "The effect of the languages used on the performance of the cross-lingual embeddings has been widely studied Dubossarsky et al., 2020; Doval et al., 2020) , but our evaluation on 110 pairs of 11 languages still shows interesting and instructive patterns. It is conspicuous in Figure 2a that, despite the widespread use of English as a transfer language, mappings of looselycomparable Wikipedia embeddings involving Spanish perform substantially better. In this case, English Wikipedia probably covers too diverse and deep articles that none of the other Wikipedias do, which makes Spanish Wikipedia a better corpus for embedding mappings. However, using mildlycomparable Wikipedia weakens this phenomenon (see Figure 2b) , which might suggest that instead of the corpus size, the real indicator of performance is the amount of overlapping information between the two corpora. We will deal with this hypothesis a lot more in the rest of this article. Figure 3 shows the average performance of loosely-comparable Wikipedia mappings broken down by both source and target language. Beside some other interesting details, an outstanding result of the Danish-Norwegian mapping is clear. In this case, language relatedness, geographical and cultural similarity are all given, therefore we can assume that the two Wikipedias are also very similar in topics, style, editing, etc. This can be considered a case where all the necessary factors are met for obtaining a high-performing mapping.", "cite_spans": [ { "start": 108, "end": 133, "text": "Dubossarsky et al., 2020;", "ref_id": "BIBREF8" }, { "start": 134, "end": 153, "text": "Doval et al., 2020)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 275, "end": 284, "text": "Figure 2a", "ref_id": "FIGREF2" }, { "start": 709, "end": 719, "text": "Figure 2b)", "ref_id": "FIGREF2" }, { "start": 949, "end": 957, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Languages", "sec_num": "4.2" }, { "text": "From this figure it seems that only very close language relatedness is really beneficial, e.g. 
between Germanic and Romance languages, but Germanic languages, for example, can be mapped to linguistically very distant Finno-Ugrian languages just as well as to non-Germanic Indo-European languages, which might also be a useful observation for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Languages", "sec_num": "4.2" }, { "text": "L-Wiki M-Wiki S-Wiki", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Languages", "sec_num": "4.2" }, { "text": "Our key observation is that one of the most required condition for good embedding mappings is corpus similarity, more precisely the amount of common contexts the words appear in, as a complement to previous claims pointing to language similarity and corpus size (Dubossarsky et al., 2020; . We introduce two measures to quantify corpus similarity:", "cite_spans": [ { "start": 262, "end": 288, "text": "(Dubossarsky et al., 2020;", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus Size and Similarity", "sec_num": "4.3" }, { "text": "\u2022 Token Overlap is the ratio of token forms used in both corpora to the number of tokens used in one or both of them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Size and Similarity", "sec_num": "4.3" }, { "text": "T O(V 1 , V 2 ) = |V 1 \u2229 V 2 | |V 1 \u222a V 2 |", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Size and Similarity", "sec_num": "4.3" }, { "text": "Most of these shared tokens are probably words of foreign origin, having the same meaning, therefore their presence in large proportions indicates similar content in the texts. However, this measure is affected by language similarity as well, and is unusable with languages written in different scripts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Size and Similarity", "sec_num": "4.3" }, { "text": "\u2022 As Word Distribution Distance between two corpora we take the normalized frequency distribution of words from our silver dictionary between the languages of the two corpora, and compute Jensen-Shannon divergence between them. This way, we use the words of the dictionary as keywords, and the divergence will be small only if the respective topics appear in a similar proportion in the corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Size and Similarity", "sec_num": "4.3" }, { "text": "As showed, mapping scores are greatly influenced by the size of the training corpora, therefore we include this information to our corpus data as well. We examined correlations using the token numbers of the source and the target corpus, the arithmetic and harmonic mean of them, and the minimum of them. The latter of these proved to be the most powerful indicator of mapping scores, so we use the minimum of token numbers of the two training corpora involved in the mapping to represent corpus size. Figure 4 shows the connection of the mapping scores to the above defined corpus similarity measures and corpus size. 
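Both corpus measures are cheap to compute once vocabularies and word frequencies have been extracted from the two corpora. The sketch below is a minimal implementation under that assumption; the variable names and the use of SciPy are ours rather than the paper's released code.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def token_overlap(vocab1, vocab2):
    """TO(V1, V2) = |V1 ∩ V2| / |V1 ∪ V2| over the two corpus vocabularies."""
    v1, v2 = set(vocab1), set(vocab2)
    return len(v1 & v2) / len(v1 | v2)

def word_distribution_distance(freqs1, freqs2, dictionary):
    """Jensen-Shannon divergence between the normalised frequencies of
    silver-dictionary words in the two corpora.

    freqs1, freqs2: dict word -> raw count in corpus 1 / corpus 2
    dictionary:     list of (word_in_lang1, word_in_lang2) pairs"""
    p = np.array([freqs1.get(w1, 0) for w1, _ in dictionary], dtype=float)
    q = np.array([freqs2.get(w2, 0) for _, w2 in dictionary], dtype=float)
    p, q = p / p.sum(), q / q.sum()
    # SciPy returns the JS distance (square root of the divergence)
    return jensenshannon(p, q) ** 2
```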
All of these corpus properties seem to indicate performance well, as models with more overlap in their vocabularies, with more similar word distributions, or trained on a larger corpora perform generally better.", "cite_spans": [], "ref_spans": [ { "start": 502, "end": 510, "text": "Figure 4", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Corpus Size and Similarity", "sec_num": "4.3" }, { "text": "However, the parameters of the regression lines for corpus types differ clearly, implying that when we make the corpora mildly and strictly comparable, so overlap ratio and word distribution similarity increase, the results are not improving as much as we could have expected by extrapolating the scores of the full Wikipedia corpus. But similarly, the smaller size of comparable and parallel corpora does not directly lead to a decrease in performance either.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Size and Similarity", "sec_num": "4.3" }, { "text": "It can be clearly seen from Figure 4c , that although there is a connection between corpus size and cross-lingual mapping score as well, big corpora are neither necessary nor sufficient for good results: some of the best scores are reached by mapping embeddings trained on the Danish-Norwegian parallel corpus, having less than 10 million tokens, while the biggest corpora consist of approximately 1 billion tokens. Also, correlations in Table 1 show that the relationship between corpus size and performance gets stronger as the corpus is filtered for overlapping articles (M-Wiki), and even stronger for parallel sentences (S-Wiki), again supporting our hypothesis that the amount of common information is an important factor for mappings. We measured corpus similarity by the number of common tokens and the distribution of dictionary words successfully, but there are probably other, more widely usable measures to be found in future work.", "cite_spans": [], "ref_spans": [ { "start": 28, "end": 37, "text": "Figure 4c", "ref_id": "FIGREF4" }, { "start": 438, "end": 445, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Corpus Size and Similarity", "sec_num": "4.3" }, { "text": "To further validate our statement that common content in corpora greatly influences mapping scores, we conduct controlled experiments, in which we align embeddings of the same language, and con- struct the training corpora from subsets of a single Wikipedia. For 3 languages (English, Spanish and Hungarian) we create corpora in different sizes, and for all of these we select subsets that are matching in size, but contain 0%, 33%, 66%, or 100% of the text in the first corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Content Overlap", "sec_num": "4.4" }, { "text": "This methodology allows us to examine the effects of size and content overlap explicitly (and exclude the effects of typological differences between languages). These parts, however, may contain very similar articles in the same field, which are not accounted as overlap. Probably this is why small size and zero overlap still yield very high P@1 scores, as shown in Figure 5 . Still, the trends are convincing that content overlap is at least as important as corpus size. 
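To make the controlled setup concrete, one way to draw size-matched, overlap-controlled corpus pairs from a single Wikipedia is sketched below; sampling at the sentence level and the exact bookkeeping are our assumptions, as the paper does not specify the splitting granularity.

```python
import random

def overlap_pair(sentences, size, overlap):
    """Build two equally sized corpora sharing a given fraction of text.

    sentences: pool of monolingual sentences (e.g. from one Wikipedia)
    size:      number of sentences per corpus
    overlap:   shared fraction, e.g. 0.0, 0.33, 0.66 or 1.0"""
    shared_n = int(size * overlap)
    pool = random.sample(sentences, 2 * size - shared_n)
    corpus_a = pool[:size]
    # corpus_b reuses the first shared_n sentences of corpus_a and fills the
    # rest with fresh, disjoint text, so both corpora match in size
    corpus_b = corpus_a[:shared_n] + pool[size:]
    return corpus_a, corpus_b
```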
We show the scores of the RCSLS mapping, but the same patterns can be seen with other methods.", "cite_spans": [], "ref_spans": [ { "start": 367, "end": 375, "text": "Figure 5", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Content Overlap", "sec_num": "4.4" }, { "text": "These results also imply that word embeddings represent word usage of a specific corpus, rather than a whole language, which is often forgotten in multilingual tasks. Therefore, it is possible that a corpus can even be too large compared to another if there are too many different contexts appearing in only one of them, which might explain why Spanish Wikipedia is superior to English as a training corpus. We can conclude that even among corpora of the same domain, corpus similarity is a major requirement for the success of word embedding alignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Content Overlap", "sec_num": "4.4" }, { "text": "Previous work has extensively studied the (non-) isomorphism of word embeddings, and its effect on bilingual alignments. This problem can be considered one of the core questions of bilingual mappings, since this method gains its inspiration and theoretical validity from the assumption that embeddings trained on different languages should be approximately isomorphic. However, our surprising results show that the degree of isomorphism is generally less correlated to BLI scores than corpus properties.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embedding Isomorphism", "sec_num": "4.5" }, { "text": "Measuring the degree of isomorphism between word embedding models is an interesting question in itself as well, and several solutions have already been proposed for it. We adopt five existing measures (for more details see the works cited below) and introduce a new one, based on the similarity of words. Table 3 : Pearson correlation coefficients between P@1 scores and isomorphism among mappings scoring 0.6 or higher. We indicate the number of mappings meeting this criterion below the corpus types.", "cite_spans": [], "ref_spans": [ { "start": 305, "end": 312, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Embedding Isomorphism", "sec_num": "4.5" }, { "text": "\u2022 Laplacian Isospectrality (S\u00f8gaard et al., 2018) measures the difference between the Laplacian eigenvalues of word nearest neighbor graphs. 
We take the average isospectrality of 10 graphs, each constructed of 50 random words from our dictionary and their translations.", "cite_spans": [ { "start": 27, "end": 49, "text": "(S\u00f8gaard et al., 2018)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "L-Wiki M-Wiki S-Wiki", "sec_num": null }, { "text": "\u2022 Singular Value Gap (SVG; Dubossarsky et al., 2020) is the distance between the sorted singular values of the two word embedding matrices.", "cite_spans": [ { "start": 27, "end": 52, "text": "Dubossarsky et al., 2020)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "L-Wiki M-Wiki S-Wiki", "sec_num": null }, { "text": "\u2022 Spectral Condition (Dubossarsky et al., 2020) is the harmonic mean of the condition numbers of the two embedding matrices, which measure their sensitivity to noise.", "cite_spans": [ { "start": 21, "end": 47, "text": "(Dubossarsky et al., 2020)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "L-Wiki M-Wiki S-Wiki", "sec_num": null }, { "text": "\u2022 Effective Spectral Condition (Dubossarsky et al., 2020) is the harmonic mean of the effective condition numbers of the two embedding matrices.", "cite_spans": [ { "start": 31, "end": 57, "text": "(Dubossarsky et al., 2020)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "L-Wiki M-Wiki S-Wiki", "sec_num": null }, { "text": "\u2022 Relational Similarity quantifies how similarly the two models rate the proximity of word pairs. We take 10,000 random word pairs and their translations from our dictionary, and compute the Pearson correlation coefficient between the two lists of cosine similarity scores between the pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "L-Wiki M-Wiki S-Wiki", "sec_num": null }, { "text": "\u2022 Neighbor Overlap quantifies the overlap between the neighborhood of words in the embedding models. We take a word from the dictionary in one language, and find its 10 nearest neighbors among the dictionary entries. Then we count how many of the translations of these appear among the nearest neighbors of the translation of the original word. We repeat this 1000 times, and compute the average of the outcomes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "L-Wiki M-Wiki S-Wiki", "sec_num": null }, { "text": "In Table 2 we show the correlations between the above isomorphism measures and mapping scores. It is interesting that while the connection between corpus properties and bilingual scores was the strongest in strictly-comparable corpora, the opposite seems to be true in this case: the performance of mapping embeddings trained on S-Wiki seems not to be very dependent on isomorphism. Often it even happens that the correlation to embedding similarity/dissimilarity turns into negative/positive in the strictly-comparable case.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "L-Wiki M-Wiki S-Wiki", "sec_num": null }, { "text": "This raises the question if it is possible that two models, in which the same word has different neighbors, are transformed so that the appropriate words still become nearest neighbors, or above a certain score isomorphism remains a requirement for bilingual performance. To answer this, we compute the correlations between isomorphism measures and mapping scores again, but only among mappings with P@1 score 0.6 or higher. 
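Before turning to those restricted correlations, the two word-similarity-based measures introduced above can be made concrete with the following sketch. It is an illustrative re-implementation with our own function names, assuming unique dictionary entries and word-to-vector dicts for the two embedding spaces.

```python
import random
import numpy as np
from scipy.stats import pearsonr

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def relational_similarity(src_emb, tgt_emb, dictionary, n_pairs=10_000):
    """Pearson correlation between cosine similarities of random source word
    pairs and the similarities of their translations."""
    trans = dict(dictionary)                      # source word -> target word
    src_words = list(trans)
    sims_src, sims_tgt = [], []
    for _ in range(n_pairs):
        a, b = random.sample(src_words, 2)
        sims_src.append(cos(src_emb[a], src_emb[b]))
        sims_tgt.append(cos(tgt_emb[trans[a]], tgt_emb[trans[b]]))
    return pearsonr(sims_src, sims_tgt)[0]

def neighbor_overlap(src_emb, tgt_emb, dictionary, k=10, n_words=1000):
    """Average number of a word's k nearest dictionary neighbours whose
    translations also fall among the k nearest neighbours of its translation."""
    trans = dict(dictionary)
    src_words, tgt_words = list(trans), list(trans.values())
    total = 0
    for w in random.sample(src_words, n_words):
        nn_src = sorted((x for x in src_words if x != w),
                        key=lambda x: -cos(src_emb[w], src_emb[x]))[:k]
        nn_tgt = sorted((t for t in tgt_words if t != trans[w]),
                        key=lambda t: -cos(tgt_emb[trans[w]], tgt_emb[t]))[:k]
        total += len({trans[x] for x in nn_src} & set(nn_tgt))
    return total / n_words
```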
Table 3 shows that in these cases performance does indeed depend on isomorphism, especially on Laplacian, relational, and neighbor similarities.", "cite_spans": [], "ref_spans": [ { "start": 425, "end": 432, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "L-Wiki M-Wiki S-Wiki", "sec_num": null }, { "text": "We can see that mapping scores are connected to the isomorphism of source embeddings, especially among the relatively well performing models. Therefore we can use both isomorphism and corpus similarity to predict bilingual performance, which we will do in the next section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "L-Wiki M-Wiki S-Wiki", "sec_num": null }, { "text": "In our final experiments, we try to predict the mapping scores from the above studied corpus and isomorphism measures. We make predictions based on our three corpus properties, six isomorphism measures, and all of these combined, using random forest regression with the default parameters in Scikit-learn (Pedregosa et al., 2011) , evaluating the model with the Leave-One-Out method.", "cite_spans": [ { "start": 305, "end": 329, "text": "(Pedregosa et al., 2011)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Predicting BLI Scores", "sec_num": "4.6" }, { "text": "The results in Table 4 show that mapping scores are very well predictable in most cases, but this varies between corpus types and alignment methods. Properties of the corpus, however, are almost always better predictors than isomorphism; the only exception is Non-adversarial Translation of looselycomparable Wikipedia embeddings.", "cite_spans": [], "ref_spans": [ { "start": 15, "end": 22, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Predicting BLI Scores", "sec_num": "4.6" }, { "text": "Combining corpus and isomorphism measures usually does not lead to an improvement either, which could mean that isomorphism depends on corpus properties as well. To find this out, we make predictions of all isomorphism measures from our three corpus properties, and show the results in Table 5 . From these we see that although isomorphism does not depend solely on corpus similarity, it is also greatly influenced by it.", "cite_spans": [], "ref_spans": [ { "start": 286, "end": 293, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Predicting BLI Scores", "sec_num": "4.6" }, { "text": "It is important to note that we did not use language information at all, therefore these high scores mean that the above corpus measures are more important than language similarity, or at least they carry this information as well. These results again support our observation on the importance of corpus similarity for good performance of bilingual word embedding mappings. Table 4 : R2 scores of the predictions of P@1 scores, using random forest regression based on our three corpus properties combined, six isomorphism measures combined, and all of these.", "cite_spans": [], "ref_spans": [ { "start": 373, "end": 380, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Predicting BLI Scores", "sec_num": "4.6" }, { "text": "We examined the connection of embedding mapping scores to languages, corpus properties, and embedding isomorphism. We found that the Spanish Wikipedia is better for this purpose than the English Wikipedia, often used by default. 
This is explained by our other experiments on the relationship of corpus properties and mapping quality, where it turned out that corpus similarity is at least as important as corpus size, therefore the hugeness and wide diversity of the English Wikipedia can be harmful. Moreover, we have seen that language similarity is really beneficial for very closely related languages only, e.g. between Germanic or Romance languages. Mapping scores are well predictable even without any information about the languages, based on three properties of the corpora: corpus size, proportion of common tokens, and distance of the word distributions. These data also surpass existing embedding isomorphism measures as predictors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "On the other hand, this paper focuses on BLI scores only, which were shown to not correlate perfectly with bilingual performance on downstream tasks . We suppose that to some extent our findings hold in downstream situations as well, since downstream performance cannot be independent of BLI scores, but this question should be part of further research. The main difference between downstream and BLI evaluation scores is probably the importance of monolingual embedding quality: while two embedding matrices can be trained almost perfectly isomorphically on a relatively small parallel corpus, the monolingual performance of these embeddings probably lags behind embeddings trained on a big corpus, Wikipedia for example. But at the same time, this also shows that embeddings can be mapped very well even if they are not of the highest quality, but their corpora are similar enough.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Our codes, mapping dictionaries, and more mapping results are available at https://github.com/xerevity/mappability", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "As available at: https://linguatools.org/tools/corpora/wikipediamonolingual-corpora/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "R\u00e9ka Cserh\u00e1ti was supported by the \u00daNKP-21-1 -New National Excellence Program of the Ministry for Innovation and Technology from the source of the National Research, Development and Innovation Fund.The research presented in this paper was partly supported by the Ministry of Innovation and the National Research, Development and Innovation Office within the framework of the Artificial Intelligence National Laboratory Programme.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Contextual embeddings: When are they worth it?", "authors": [ { "first": "Simran", "middle": [], "last": "Arora", "suffix": "" }, { "first": "Avner", "middle": [], "last": "May", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "R\u00e9", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2650--2663", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.236" ] }, "num": null, "urls": [], "raw_text": "Simran Arora, Avner May, Jian Zhang, and Christopher R\u00e9. 2020. Contextual embeddings: When are they worth it? 
In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 2650-2663, Online. Association for Computa- tional Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Learning principled bilingual mappings of word embeddings while preserving monolingual invariance", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2289--2294", "other_ids": { "DOI": [ "10.18653/v1/D16-1250" ] }, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word em- beddings while preserving monolingual invariance. In Proceedings of the 2016 Conference on Empiri- cal Methods in Natural Language Processing, pages 2289-2294, Austin, Texas. Association for Compu- tational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "5012--5019", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intel- ligence, pages 5012-5019.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "789--798", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. A robust self-learning method for fully un- supervised cross-lingual mappings of word embed- dings. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 789-798.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": { "DOI": [ "10.1162/tacl_a_00051" ] }, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "2020. word2word: A collection of bilingual lexicons for 3,564 language pairs", "authors": [ { "first": "Yo Joong", "middle": [], "last": "Choe", "suffix": "" }, { "first": "Kyubyong", "middle": [], "last": "Park", "suffix": "" }, { "first": "Dongwoo", "middle": [], "last": "Kim", "suffix": "" } ], "year": null, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "3036--3045", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yo Joong Choe, Kyubyong Park, and Dongwoo Kim. 2020. word2word: A collection of bilingual lexi- cons for 3,564 language pairs. In Proceedings of the 12th Language Resources and Evaluation Con- ference, pages 3036-3045, Marseille, France. Euro- pean Language Resources Association.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Word translation without parallel data", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" }, { "first": "Ludovic", "middle": [], "last": "Denoyer", "suffix": "" }, { "first": "", "middle": [], "last": "Herv'e J'egou", "suffix": "" } ], "year": 2018, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv'e J'egou. 2018. Word translation without parallel data. ArXiv, abs/1710.04087.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "On the robustness of unsupervised and semi-supervised cross-lingual word embedding learning", "authors": [ { "first": "Yerai", "middle": [], "last": "Doval", "suffix": "" }, { "first": "Jose", "middle": [], "last": "Camacho-Collados", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Espinosa Anke", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Schockaert", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "4013--4023", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yerai Doval, Jose Camacho-Collados, Luis Es- pinosa Anke, and Steven Schockaert. 2020. On the robustness of unsupervised and semi-supervised cross-lingual word embedding learning. In Proceed- ings of the 12th Language Resources and Evaluation Conference, pages 4013-4023, Marseille, France. 
European Language Resources Association.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The secret is in the spectra: Predicting cross-lingual task performance with spectral similarity measures", "authors": [ { "first": "Haim", "middle": [], "last": "Dubossarsky", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "2377--2390", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.186" ] }, "num": null, "urls": [], "raw_text": "Haim Dubossarsky, Ivan Vuli\u0107, Roi Reichart, and Anna Korhonen. 2020. The secret is in the spectra: Pre- dicting cross-lingual task performance with spectral similarity measures. In Proceedings of the 2020 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 2377-2390, On- line. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "How to (properly) evaluate crosslingual word embeddings: On strong baselines, comparative analyses, and some misconceptions", "authors": [ { "first": "Goran", "middle": [], "last": "Glava\u0161", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Litschko", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "710--721", "other_ids": { "DOI": [ "10.18653/v1/P19-1070" ] }, "num": null, "urls": [], "raw_text": "Goran Glava\u0161, Robert Litschko, Sebastian Ruder, and Ivan Vuli\u0107. 2019. How to (properly) evaluate cross- lingual word embeddings: On strong baselines, com- parative analyses, and some misconceptions. In Pro- ceedings of the 57th Annual Meeting of the Associa- tion for Computational Linguistics, pages 710-721, Florence, Italy. Association for Computational Lin- guistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Non-adversarial unsupervised word translation", "authors": [ { "first": "Yedid", "middle": [], "last": "Hoshen", "suffix": "" }, { "first": "Lior", "middle": [], "last": "Wolf", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "469--478", "other_ids": { "DOI": [ "10.18653/v1/D18-1043" ] }, "num": null, "urls": [], "raw_text": "Yedid Hoshen and Lior Wolf. 2018. Non-adversarial unsupervised word translation. In Proceedings of the 2018 Conference on Empirical Methods in Nat- ural Language Processing, pages 469-478, Brus- sels, Belgium. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Loss in translation: Learning bilingual word mapping with a retrieval criterion", "authors": [ { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Herv\u00e9", "middle": [], "last": "J\u00e9gou", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2979--2984", "other_ids": { "DOI": [ "10.18653/v1/D18-1330" ] }, "num": null, "urls": [], "raw_text": "Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Herv\u00e9 J\u00e9gou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2979-2984, Brussels, Bel- gium. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Lost in evaluation: Misleading benchmarks for bilingual dictionary induction", "authors": [ { "first": "Yova", "middle": [], "last": "Kementchedjhieva", "suffix": "" }, { "first": "Mareike", "middle": [], "last": "Hartmann", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3336--3341", "other_ids": { "DOI": [ "10.18653/v1/D19-1328" ] }, "num": null, "urls": [], "raw_text": "Yova Kementchedjhieva, Mareike Hartmann, and An- ders S\u00f8gaard. 2019. Lost in evaluation: Misleading benchmarks for bilingual dictionary induction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 3336- 3341, Hong Kong, China. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Choosing transfer languages for cross-lingual learning", "authors": [ { "first": "Yu-Hsiang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Chian-Yu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Zirui", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yuyan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Mengzhou", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Shruti", "middle": [], "last": "Rijhwani", "suffix": "" }, { "first": "Junxian", "middle": [], "last": "He", "suffix": "" }, { "first": "Zhisong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xuezhe", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Antonios", "middle": [], "last": "Anastasopoulos", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Littell", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3125--3135", "other_ids": { "DOI": [ "10.18653/v1/P19-1301" ] }, "num": null, "urls": [], "raw_text": "Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, and Graham Neubig. 2019. Choosing transfer languages for cross-lingual learning. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 3125-3135, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Evaluating multilingual text encoders for unsupervised cross-lingual retrieval", "authors": [ { "first": "Robert", "middle": [], "last": "Litschko", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vulic", "suffix": "" }, { "first": "Simone", "middle": [ "Paolo" ], "last": "Ponzetto", "suffix": "" }, { "first": "Goran", "middle": [], "last": "Glavas", "suffix": "" } ], "year": 2021, "venue": "Advances in Information Retrieval -43rd", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1007/978-3-030-72113-8_23" ] }, "num": null, "urls": [], "raw_text": "Robert Litschko, Ivan Vulic, Simone Paolo Ponzetto, and Goran Glavas. 2021. Evaluating multilin- gual text encoders for unsupervised cross-lingual re- trieval. In Advances in Information Retrieval -43rd", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Proceedings, Part I", "authors": [], "year": 2021, "venue": "European Conference on IR Research, ECIR 2021", "volume": "12656", "issue": "", "pages": "342--358", "other_ids": {}, "num": null, "urls": [], "raw_text": "European Conference on IR Research, ECIR 2021, Virtual Event, March 28 -April 1, 2021, Proceed- ings, Part I, volume 12656 of Lecture Notes in Com- puter Science, pages 342-358. 
Springer.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Exploiting similarities among languages for machine translation", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Le", "suffix": "" }, { "first": "", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for ma- chine translation.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Neural and Information Processing System (NIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013b. Distributed represen- tations of words and phrases and their composition- ality. In Neural and Information Processing System (NIPS).", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Analyzing the limitations of cross-lingual word embedding mappings", "authors": [ { "first": "Aitor", "middle": [], "last": "Ormazabal", "suffix": "" }, { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P19-1492" ] }, "num": null, "urls": [], "raw_text": "Aitor Ormazabal, Mikel Artetxe, Gorka Labaka, Aitor Soroa, and Eneko Agirre. 2019. Analyzing the lim- itations of cross-lingual word embedding mappings.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "4990--4995", "other_ids": {}, "num": null, "urls": [], "raw_text": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4990-4995, Florence, Italy. Association for Compu- tational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Bilingual lexicon induction with semi-supervision in non-isometric embedding spaces", "authors": [ { "first": "Barun", "middle": [], "last": "Patra", "suffix": "" }, { "first": "Joel", "middle": [ "Ruben" ], "last": "", "suffix": "" }, { "first": "Antony", "middle": [], "last": "Moniz", "suffix": "" }, { "first": "Sarthak", "middle": [], "last": "Garg", "suffix": "" }, { "first": "Matthew", "middle": [ "R" ], "last": "Gormley", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "184--193", "other_ids": { "DOI": [ "10.18653/v1/P19-1018" ] }, "num": null, "urls": [], "raw_text": "Barun Patra, Joel Ruben Antony Moniz, Sarthak Garg, Matthew R. 
Gormley, and Graham Neubig. 2019. Bilingual lexicon induction with semi-supervision in non-isometric embedding spaces. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 184-193, Flo- rence, Italy. Association for Computational Linguis- tics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Scikit-learn: Machine learning in Python", "authors": [ { "first": "F", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "G", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "A", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "V", "middle": [], "last": "Michel", "suffix": "" }, { "first": "B", "middle": [], "last": "Thirion", "suffix": "" }, { "first": "O", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "M", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "P", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "R", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "V", "middle": [], "last": "Dubourg", "suffix": "" }, { "first": "J", "middle": [], "last": "Vanderplas", "suffix": "" }, { "first": "A", "middle": [], "last": "Passos", "suffix": "" }, { "first": "D", "middle": [], "last": "Cournapeau", "suffix": "" }, { "first": "M", "middle": [], "last": "Brucher", "suffix": "" }, { "first": "M", "middle": [], "last": "Perrot", "suffix": "" }, { "first": "E", "middle": [], "last": "Duchesnay", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duch- esnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": { "DOI": [ "10.3115/v1/D14-1162" ] }, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Software Framework for Topic Modelling with Large Corpora", "authors": [ { "first": "Petr", "middle": [], "last": "Radim\u0159eh\u016f\u0159ek", "suffix": "" }, { "first": "", "middle": [], "last": "Sojka", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks", "volume": "", "issue": "", "pages": "45--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Frame- work for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Val- letta, Malta. ELRA. 
http://is.muni.cz/ publication/884893/en.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A survey of cross-lingual word embedding models", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2019, "venue": "Journal of Artificial Intelligence Research", "volume": "65", "issue": "", "pages": "569--631", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Ruder, Ivan Vuli\u0107, and Anders S\u00f8gaard. 2019. A survey of cross-lingual word embedding models. Journal of Artificial Intelligence Research, 65:569-631.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Wiki-Matrix: Mining 135M parallel sentences in 1620 language pairs from Wikipedia", "authors": [ { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Shuo", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Hongyu", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", "volume": "", "issue": "", "pages": "1351--1361", "other_ids": {}, "num": null, "urls": [], "raw_text": "Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzm\u00e1n. 2021. Wiki- Matrix: Mining 135M parallel sentences in 1620 language pairs from Wikipedia. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1351-1361, Online. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "On the limitations of unsupervised bilingual dictionary induction", "authors": [ { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "778--788", "other_ids": { "DOI": [ "10.18653/v1/P18-1072" ] }, "num": null, "urls": [], "raw_text": "Anders S\u00f8gaard, Sebastian Ruder, and Ivan Vuli\u0107. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 778- 788, Melbourne, Australia. Association for Compu- tational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Energy and policy considerations for deep learning in nlp", "authors": [ { "first": "Emma", "middle": [], "last": "Strubell", "suffix": "" }, { "first": "Ananya", "middle": [], "last": "Ganesh", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mc-Callum", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.02243" ] }, "num": null, "urls": [], "raw_text": "Emma Strubell, Ananya Ganesh, and Andrew Mc- Callum. 2019. Energy and policy considera- tions for deep learning in nlp. 
arXiv preprint arXiv:1906.02243.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Do we really need fully unsupervised cross-lingual embeddings?", "authors": [ { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Goran", "middle": [], "last": "Glava\u0161", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "4407--4418", "other_ids": { "DOI": [ "10.18653/v1/D19-1449" ] }, "num": null, "urls": [], "raw_text": "Ivan Vuli\u0107, Goran Glava\u0161, Roi Reichart, and Anna Ko- rhonen. 2019. Do we really need fully unsuper- vised cross-lingual embeddings? In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4407-4418, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Are all good word vector spaces isomorphic?", "authors": [ { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "3178--3192", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.257" ] }, "num": null, "urls": [], "raw_text": "Ivan Vuli\u0107, Sebastian Ruder, and Anders S\u00f8gaard. 2020. Are all good word vector spaces isomorphic? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3178-3192, Online. Association for Computa- tional Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Normalized word embedding and orthogonal transform for bilingual word translation", "authors": [ { "first": "Chao", "middle": [], "last": "Xing", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yiye", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1006--1011", "other_ids": { "DOI": [ "10.3115/v1/N15-1104" ] }, "num": null, "urls": [], "raw_text": "Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal trans- form for bilingual word translation. In Proceedings of the 2015 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 1006-1011, Denver, Colorado. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "Distributions of BLI scores of embedding mappings using different methods and corpus types.", "type_str": "figure" }, "FIGREF1": { "num": null, "uris": null, "text": "(a) Score distributions of languages involved in mapping embeddings trained on loosely-comparable full Wikipedia. 
(b) Mapping scores of languages, with embeddings trained on mildly-comparable Wiki corpus. (c) Mapping scores of languages, with embeddings trained on a strictly-comparable (parallel) corpus.", "type_str": "figure" }, "FIGREF2": { "num": null, "uris": null, "text": "Score distributions of embedding mappings involving a language (either as source or as target language).", "type_str": "figure" }, "FIGREF3": { "num": null, "uris": null, "text": "Average P@1 scores of loosely-comparable Wikipedia embedding mappings in all examined source-target pairs of languages.", "type_str": "figure" }, "FIGREF4": { "num": null, "uris": null, "text": "(a) Connection of mapping score to the proportion of overlapping tokens (b) Connection of mapping score to word distribution distance (c) Connection of mapping score to corpus size (on a logarithmic scale) Relationship between performance of bilingual mappings and corpus properties.", "type_str": "figure" }, "FIGREF5": { "num": null, "uris": null, "text": "P@1 scores of RCSLS mappings of embeddings from Wikipedia parts of various overlap ratio and size.", "type_str": "figure" }, "TABREF2": { "content": "
Pearson correlation coefficients between P@1 scores and isomorphism.
                     L-Wiki   M-Wiki   S-Wiki
#                        62       72       27
Laplacian            -0.851   -0.681   -0.926
SVG                  -0.346   -0.518   -0.896
Spectral             -0.383    0.170   -0.548
Effective Spectral   -0.174    0.118    0.265
Relational            0.895    0.879    0.921
Neighbors             0.839    0.704    0.898
", "type_str": "table", "html": null, "text": "", "num": null }, "TABREF5": { "content": "", "type_str": "table", "html": null, "text": "R2 scores of the predictions of isomorphism, based on our three corpus properties combined, using random forest regression.", "num": null } } } }