{ "paper_id": "N19-1045", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:03:44.302949Z" }, "title": "Aligning Vector-spaces with Noisy Supervised Lexicons", "authors": [ { "first": "Noa", "middle": [ "Yehezkel" ], "last": "Lubin", "suffix": "", "affiliation": { "laboratory": "", "institution": "Bar Ilan University", "location": { "settlement": "Ramat Gan", "country": "Israel" } }, "email": "" }, { "first": "Jacob", "middle": [], "last": "Goldberger", "suffix": "", "affiliation": { "laboratory": "", "institution": "Bar Ilan University", "location": { "settlement": "Ramat Gan", "country": "Israel" } }, "email": "jacob.goldberger@biu.ac.il" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "", "affiliation": { "laboratory": "", "institution": "Bar Ilan University", "location": { "settlement": "Ramat Gan", "country": "Israel" } }, "email": "yoav.goldberg@gmail.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The problem of learning to translate between two vector spaces given a set of aligned points arises in several application areas of NLP. Current solutions assume that the lexicon which defines the alignment pairs is noise-free. We consider the case where the set of aligned points is allowed to contain an amount of noise, in the form of incorrect lexicon pairs and show that this arises in practice by analyzing the edited dictionaries after the cleaning process. We demonstrate that such noise substantially degrades the accuracy of the learned translation when using current methods. We propose a model that accounts for noisy pairs. This is achieved by introducing a generative model with a compatible iterative EM algorithm. The algorithm jointly learns the noise level in the lexicon, finds the set of noisy pairs, and learns the mapping between the spaces. We demonstrate the effectiveness of our proposed algorithm on two alignment problems: bilingual word embedding translation, and mapping between diachronic embedding spaces for recovering the semantic shifts of words across time periods.", "pdf_parse": { "paper_id": "N19-1045", "_pdf_hash": "", "abstract": [ { "text": "The problem of learning to translate between two vector spaces given a set of aligned points arises in several application areas of NLP. Current solutions assume that the lexicon which defines the alignment pairs is noise-free. We consider the case where the set of aligned points is allowed to contain an amount of noise, in the form of incorrect lexicon pairs and show that this arises in practice by analyzing the edited dictionaries after the cleaning process. We demonstrate that such noise substantially degrades the accuracy of the learned translation when using current methods. We propose a model that accounts for noisy pairs. This is achieved by introducing a generative model with a compatible iterative EM algorithm. The algorithm jointly learns the noise level in the lexicon, finds the set of noisy pairs, and learns the mapping between the spaces. We demonstrate the effectiveness of our proposed algorithm on two alignment problems: bilingual word embedding translation, and mapping between diachronic embedding spaces for recovering the semantic shifts of words across time periods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "We consider the problem of mapping between points in different vector spaces. This problem has prominent applications in natural language processing (NLP) . 
Some examples are creating bilingual word lexicons (Mikolov et al., 2013) , machine translation (Artetxe et al., 2016 (Artetxe et al., , 2017a (Artetxe et al., ,b, 2018a Conneau et al., 2017) , hypernym generation (Yamane et al., 2016) , diachronic embeddings alignment (Hamilton et al., 2016) and domain adaptation (Barnes et al., 2018) . In all these examples one is given word embeddings in two different vector spaces, and needs to learn a mapping from one to the other.", "cite_spans": [ { "start": 149, "end": 154, "text": "(NLP)", "ref_id": null }, { "start": 208, "end": 230, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF12" }, { "start": 253, "end": 274, "text": "(Artetxe et al., 2016", "ref_id": "BIBREF1" }, { "start": 275, "end": 299, "text": "(Artetxe et al., , 2017a", "ref_id": "BIBREF2" }, { "start": 300, "end": 326, "text": "(Artetxe et al., ,b, 2018a", "ref_id": null }, { "start": 327, "end": 348, "text": "Conneau et al., 2017)", "ref_id": "BIBREF7" }, { "start": 371, "end": 392, "text": "(Yamane et al., 2016)", "ref_id": "BIBREF20" }, { "start": 427, "end": 450, "text": "(Hamilton et al., 2016)", "ref_id": "BIBREF10" }, { "start": 473, "end": 494, "text": "(Barnes et al., 2018)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The problem is traditionally posed as a supervised learning problem, in which we are given two sets of vectors (e.g.: word-vectors in Italian and in English) and a lexicon mapping the points between the two sets (known word-translation pairs). Our goal is to learn a mapping that will correctly map the vectors in one space (e.g.: English word embeddings) to their known corresponding vectors in the other (e.g.: Italian word embeddings). The mapping will then be used to translate vectors for which the correspondence is unknown. This setup was popularized by Mikolov et al. (2013) .", "cite_spans": [ { "start": 561, "end": 582, "text": "Mikolov et al. (2013)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The supervised setup assumes a perfect lexicon. Here, we consider what happens in the presence of training noise, where some of the lexicon's entries are incorrect in the sense that they don't reflect an optimal correspondence between the word vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We are given two datasets, X = x 1 , ..., x m and Y = y 1 , ..., y n , coming from d-dimensional spaces X and Y. We assume that the spaces are related, in the sense that there is a function f (x) mapping points in space X to points in space Y. In this work, we focus on linear mappings, i.e. a d \u00d7 d matrix Q mapping points via y i = Qx i . The goal of the learning is to find the translation matrix Q. In the supervised setting, m = n and we assume that \u2200i f (x i ) \u2248 y i . We refer to the sets X and Y as the supervision. 
The goal is to learn a matrix Q\u0302 such that the Frobenius norm is minimized:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Supervised Translation Problem", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\hat{Q} = \\arg\\min_{Q} \\| QX - Y \\|_F^2 .", "eq_num": "(1)" } ], "section": "The Supervised Translation Problem", "sec_num": "2.1" }, { "text": "Gradient-based The objective in (1) is convex, and can be solved via the least-squares method or via stochastic gradient optimization iterating over the pairs (x i , y i ), as done by Mikolov et al. (2013) and Dinu and Baroni (2014).", "cite_spans": [ { "start": 180, "end": 201, "text": "Mikolov et al. (2013)", "ref_id": "BIBREF12" }, { "start": 206, "end": 228, "text": "Dinu and Baroni (2014)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Existing Solution Methods", "sec_num": "2.2" }, { "text": "Orthogonal Procrustes (OP) Artetxe et al. (2016) and Smith et al. (2017) argued and proved that a linear mapping between sub-spaces must be orthogonal. This leads to the modified objective:", "cite_spans": [ { "start": 27, "end": 48, "text": "Artetxe et al. (2016)", "ref_id": "BIBREF1" }, { "start": 53, "end": 72, "text": "Smith et al. (2017)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Existing Solution Methods", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\hat{Q} = \\arg\\min_{Q : Q^T Q = I} \\| QX - Y \\|_F^2", "eq_num": "(2)" } ], "section": "Existing Solution Methods", "sec_num": "2.2" }, { "text": "Objective (2) is known as the Orthogonal Procrustes Problem. It can be solved algebraically by using a singular value decomposition (SVD). Sch\u00f6nemann (1966) proved that the solution to (2) is:", "cite_spans": [ { "start": 139, "end": 155, "text": "Sch\u00f6nemann (1966)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Existing Solution Methods", "sec_num": "2.2" }, { "text": "Q\u0302 = UV^T, where U\u03a3V^T is the SVD of YX^T.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Existing Solution Methods", "sec_num": "2.2" }, { "text": "The OP method is used in Xing et al. (2015); Artetxe et al. (2016, 2017a,b, 2018a); Hamilton et al. (2016); Conneau et al. (2017); Ruder et al. (2018).", "cite_spans": [ { "start": 25, "end": 43, "text": "Xing et al. (2015)", "ref_id": "BIBREF18" }, { "start": 46, "end": 66, "text": "Artetxe et al. (2016", "ref_id": "BIBREF1" }, { "start": 67, "end": 91, "text": "Artetxe et al. ( , 2017a", "ref_id": "BIBREF2" }, { "start": 92, "end": 118, "text": "Artetxe et al. ( ,b, 2018a", "ref_id": null }, { "start": 143, "end": 164, "text": "Conneau et al. (2017)", "ref_id": "BIBREF7" }, { "start": 167, "end": 186, "text": "Ruder et al. (2018)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Existing Solution Methods", "sec_num": "2.2" }, { "text": "The supervised alignment problem can be extended to the semi-supervised (Artetxe et al., 2017b; Ruder et al., 2018) or unsupervised (Zhang et al., 2017; Conneau et al., 2017; Artetxe et al., 2018b; Xu et al., 2018; Alvarez-Melis and Jaakkola, 2018) case, where a very small lexicon or none at all is given.
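As a concrete reference for the OP solution of Section 2.2 above, here is a minimal NumPy sketch of the closed-form solution Q = UV^T; the function name and the convention that paired vectors are stored as matrix columns are our own, not from the paper.

```python
import numpy as np

def orthogonal_procrustes(X, Y):
    """Solve min_Q ||QX - Y||_F subject to Q^T Q = I.

    X, Y are d x n matrices whose i-th columns form a supervision pair (x_i, y_i).
    The minimizer is Q = U V^T, where U S V^T is the SVD of Y X^T.
    """
    U, _, Vt = np.linalg.svd(Y @ X.T)
    return U @ Vt

# Toy sanity check: recover a random orthogonal map from noise-free pairs.
rng = np.random.default_rng(0)
d, n = 4, 100
Q_true, _ = np.linalg.qr(rng.normal(size=(d, d)))  # random orthogonal matrix
X = rng.normal(size=(d, n))
Y = Q_true @ X
print(np.allclose(orthogonal_procrustes(X, Y), Q_true))  # True
```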
In iterative methods, the lexicon is expanded and used to learn the alignment; the alignment is then used to induce the lexicon for the next iteration, and so on. In adversarial methods, a final iterative refinement step is applied after the lexicon is built. We focus here on the supervised stage of the unsupervised setting, i.e., estimating the alignment once a lexicon has been induced.", "cite_spans": [ { "start": 72, "end": 95, "text": "(Artetxe et al., 2017b;", "ref_id": "BIBREF5" }, { "start": 96, "end": 115, "text": "Ruder et al., 2018)", "ref_id": "BIBREF14" }, { "start": 132, "end": 152, "text": "(Zhang et al., 2017;", "ref_id": "BIBREF21" }, { "start": 153, "end": 174, "text": "Conneau et al., 2017;", "ref_id": "BIBREF7" }, { "start": 175, "end": 197, "text": "Artetxe et al., 2018b;", "ref_id": "BIBREF4" }, { "start": 198, "end": 214, "text": "Xu et al., 2018;", "ref_id": "BIBREF19" }, { "start": 215, "end": 215, "text": "", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Unsupervised Translation Problem", "sec_num": "2.3" }, { "text": "The previous methods assume the supervision set X, Y is perfectly correct. However, this is often not the case in practice. We consider the case where a percentage p of the pairs in the supervision set are \"noisy\": applying the gold transformation to a noisy point x j will not result in a vector close to y j . The importance of the quality of word-pair selection was previously analyzed by Vuli\u0107 and Korhonen (2016). Here, we equate \"bad pairs\" with noise, and explore the performance in the presence of noise by conducting a series of synthetic experiments. We take a set of points X, a random transformation Q and a gold set Y = QX. We define the error as \u2016Y \u2212 \u0176\u2016_F^2, where \u0176 = Q\u0302X is the prediction according to the learned transform Q\u0302. Following the claim that linear transformations between word vector spaces are orthogonal, we focus here on orthogonal transformations.", "cite_spans": [ { "start": 393, "end": 418, "text": "Vuli\u0107 and Korhonen (2016)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "The Effect of Noise", "sec_num": "3" }, { "text": "We begin by inspecting a case of a few 2-dimensional points, which can be easily visualized. We compare noise-free training to the case of a single noisy point. We construct X by sampling n = 10 points of dimension d = 2 from a normal distribution. We take nine points and transform them via a random orthogonal transform Q. We then add a single noisy pair, which is generated by sampling two normally distributed random points and treating them as a pair. The error is measured only on the nine aligned pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Low Dimensional Synthetic Data", "sec_num": null }, { "text": "When no noise is applied, both the gradient-based and Procrustes methods recover the alignment with zero error mean and variance. Once the noisy condition is applied, this is no longer the case. Figure 1(A) shows the noisy condition. Here, the red point (true) and box (prediction) represent the noisy point. Green dots are the true locations after transformation, and the blue boxes are the predicted ones after transformation. Both methods are affected by the noisy sample: all ten points fall away from their true location.
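A rough NumPy sketch of this 2-dimensional synthetic setup follows; the constants (n = 10, d = 2, a single noisy pair, error measured on the nine clean pairs) follow the text, while everything else is our own framing rather than the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 2, 10

# Nine clean pairs related by a random orthogonal map, plus one mismatched pair.
Q_gold, _ = np.linalg.qr(rng.normal(size=(d, d)))
X = rng.normal(size=(d, n))
Y = Q_gold @ X
X[:, -1] = rng.normal(size=d)  # the noisy pair: two unrelated random points
Y[:, -1] = rng.normal(size=d)

# Orthogonal Procrustes fit on all ten pairs (noisy pair included).
U, _, Vt = np.linalg.svd(Y @ X.T)
Q_hat = U @ Vt

# Error measured only on the nine aligned pairs, as in the text.
clean = slice(0, n - 1)
err = np.linalg.norm(Q_hat @ X[:, clean] - Y[:, clean], "fro") ** 2
print(f"squared Frobenius error on the clean pairs: {err:.4f}")  # > 0 because of the noisy pair
```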
The effect is especially severe for the gradient-based methods.", "cite_spans": [], "ref_spans": [ { "start": 177, "end": 188, "text": "Figure 1(A)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Low Dimensional Synthetic Data", "sec_num": null }, { "text": "High Dimensional Embeddings The experiment setup is as before, but instead of a normal distribution we use the (6B, 300d) English GloVe embeddings (Pennington et al., 2014) with a lexicon of size n = 5000. We report the mean error for various noise levels on an unseen aligned test set of size 1500.", "cite_spans": [ { "start": 143, "end": 168, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Low Dimensional Synthetic Data", "sec_num": null }, { "text": "In Figure 1(B) we can see that both methods are affected by noise. As expected, as the amount of noise increases, the error on the test set increases. We can again see that the effect is worse with the gradient-based methods.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 14, "text": "Figure 1(B)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Low Dimensional Synthetic Data", "sec_num": null }, { "text": "Having verified that noise in the supervision severely influences the solution of both methods, we turn to proposing a noise-aware model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noise-aware Model", "sec_num": "4" }, { "text": "The proposed model jointly identifies noisy pairs in the supervision set and learns a translation which ignores the noisy points. Identifying these points helps to clean the underlying lexicon (dictionary) that created the supervision. In addition, by removing those points our model learns a better translation matrix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noise-aware Model", "sec_num": "4" }, { "text": "We are given x \u2208 R^d and we sample a corresponding y \u2208 R^d by first sampling a Bernoulli random variable with probability \u03b1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Model", "sec_num": null }, { "text": "z \u223c Bernoulli(\u03b1); y \u223c N(\u00b5_y, \u03c3_y^2 I) if z = 0 ('noise'), and y \u223c N(Qx, \u03c3^2 I) if z = 1 ('aligned').", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Model", "sec_num": null }, { "text": "The density function of y is a mixture of two Gaussians:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Model", "sec_num": null }, { "text": "f(y|x) = (1\u2212\u03b1)N(\u00b5_y, \u03c3_y^2 I) + \u03b1N(Qx, \u03c3^2 I).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Model", "sec_num": null }, { "text": "The likelihood function is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Model", "sec_num": null }, { "text": "L(Q, \u03c3, \u00b5_y, \u03c3_y) = \u2211_t log f(y_t|x_t).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Model", "sec_num": null }, { "text": "EM Algorithm We apply the EM algorithm (Dempster et al., 1977) to maximize the objective in the presence of latent variables. The algorithm has both soft and hard decision variants.
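A small sketch of the mixture density above and of the posterior p(z_t = 1 | x_t, y_t) that the E-step needs; the log-space computation and the function names are our own choices, not taken from the paper.

```python
import numpy as np

def log_isotropic_gaussian(y, mean, var):
    """log N(y; mean, var * I) for a d-dimensional vector y."""
    d = y.shape[0]
    return -0.5 * (d * np.log(2 * np.pi * var) + np.sum((y - mean) ** 2) / var)

def posterior_aligned(x, y, Q, sigma2, mu_y, sigma2_y, alpha):
    """p(z = 1 | x, y): posterior probability that (x, y) is an aligned (non-noise) pair."""
    log_aligned = np.log(alpha) + log_isotropic_gaussian(y, Q @ x, sigma2)
    log_noise = np.log(1.0 - alpha) + log_isotropic_gaussian(y, mu_y, sigma2_y)
    m = max(log_aligned, log_noise)  # log-sum-exp for numerical stability
    log_f = m + np.log(np.exp(log_aligned - m) + np.exp(log_noise - m))
    return np.exp(log_aligned - log_f)

# Tiny demo with made-up numbers: y matches Qx exactly, so the posterior is close to 1.
Q = np.array([[0.0, -1.0], [1.0, 0.0]])  # 90-degree rotation
x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(posterior_aligned(x, y, Q, 0.1, np.zeros(2), 1.0, 0.5))
```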
We used the hard-decision variant, which we find more natural, and note that the posterior probability of z t was close to 0 or 1 also in the soft-decision case.", "cite_spans": [ { "start": 39, "end": 62, "text": "(Dempster et al., 1977)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Generative Model", "sec_num": null }, { "text": "It is important to properly initialize the EM algorithm to avoid convergence to a local optimum. We initialize Q by applying OP on the entire lexicon (not just the clean pairs). We initialize the variance \u03c3 by calculating \u03c3^2 = (1/(n\u2022d)) \u2211_{t=1}^{n} \u2016Qx_t \u2212 y_t\u2016^2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Model", "sec_num": null }, { "text": "We initialize \u00b5_y, \u03c3_y by taking the mean and variance of the entire dataset. Finally, we initialize \u03b1 to 0.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Model", "sec_num": null }, { "text": "The (hard version) EM algorithm is shown in Algorithm box 1. The runtime of each iteration is dominated by the OP algorithm (matrix multiplication and SVD on a d \u00d7 d matrix). Each iteration contains an additional matrix multiplication and a few simple vector operations. Figure 1(B) shows it obtains perfect results on the simulated noisy data.", "cite_spans": [], "ref_spans": [ { "start": 269, "end": 280, "text": "Figure 1(B)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Generative Model", "sec_num": null }, { "text": "Algorithm 1 Noise-aware Alignment Data: List of paired vectors:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Model", "sec_num": null }, { "text": "(x_1, y_1), ..., (x_n, y_n). Result: Q, \u03c3, \u00b5_y, \u03c3_y. while |\u03b1_curr \u2212 \u03b1_prev| > \u03b5 do: E step: w_t = p(z_t = 1|x_t, y_t) = \u03b1N(Qx_t, \u03c3^2 I) / f(y_t|x_t); h_t = 1(w_t > 0.5); n_1 = \u2211_t h_t. M step:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Model", "sec_num": null }, { "text": "Apply OP on the subset {t | h_t = 1} to find Q.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Model", "sec_num": null }, { "text": "\u03c3^2 = (1/(d\u2022n_1)) \u2211_{t|h_t=1} \u2016Qx_t \u2212 y_t\u2016^2 ; \u00b5_y = (1/(n\u2212n_1)) \u2211_{t|h_t=0} y_t ; \u03c3_y^2 = (1/(d(n\u2212n_1))) \u2211_{t|h_t=0} \u2016\u00b5_y \u2212 y_t\u2016^2 ; \u03b1_prev = \u03b1_curr ; \u03b1_curr = n_1/n. end 5 Experiments", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Model", "sec_num": null }, { "text": "Experiment Setup This experiment tests the noise-aware solution on an unsupervised translation problem. The goal is to learn the \"translation matrix\", i.e., a transformation matrix between two languages, by building a dictionary. We can treat the unsupervised setup, after retrieving a lexicon, as an iterative supervised setup where some of the lexicon pairs are noisy. We assume the unsupervised setting will contain a higher amount of noise than the supervised one, especially in the first iterations. We follow the experiment setup in Artetxe et al. (2018b), but instead of using OP for learning the translation matrix, we use our Noise-Aware Alignment (NAA), meaning we jointly learn to align and to ignore the noisy pairs. We used the En-It dataset provided by Dinu and Baroni (2014) and the extensions En-De, En-Fi and En-Es of Artetxe et al. (2018a, 2017b).
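For reference, a compact NumPy sketch of the hard-EM loop of Algorithm 1 above; the initialization follows the text, while the variable names, the convergence tolerance, and the guard against a degenerate split are our own additions rather than the paper's released implementation.

```python
import numpy as np

def procrustes(X, Y):
    U, _, Vt = np.linalg.svd(Y @ X.T)
    return U @ Vt

def noise_aware_alignment(X, Y, tol=1e-4, max_iter=100):
    """Hard-EM noise-aware alignment. X, Y: d x n matrices of paired columns.

    Returns the learned orthogonal map Q and a boolean mask of pairs kept as 'aligned'.
    """
    d, n = X.shape
    # Initialization as described in the text.
    Q = procrustes(X, Y)                               # OP on the full (noisy) lexicon
    sigma2 = np.sum((Q @ X - Y) ** 2) / (n * d)
    mu_y = Y.mean(axis=1)
    sigma2_y = np.sum((Y - mu_y[:, None]) ** 2) / (n * d)
    alpha = 0.5
    h = np.ones(n, dtype=bool)
    for _ in range(max_iter):
        # E-step: log-density of each pair under the 'aligned' and 'noise' components.
        log_a = np.log(alpha) - 0.5 * (d * np.log(2 * np.pi * sigma2)
                                       + np.sum((Q @ X - Y) ** 2, axis=0) / sigma2)
        log_n = np.log(1 - alpha) - 0.5 * (d * np.log(2 * np.pi * sigma2_y)
                                           + np.sum((Y - mu_y[:, None]) ** 2, axis=0) / sigma2_y)
        h = log_a > log_n                              # hard decision, equivalent to w_t > 0.5
        n1 = int(h.sum())
        if n1 == 0 or n1 == n:                         # degenerate split: stop early
            break
        # M-step: refit each mixture component on its assigned pairs.
        Q = procrustes(X[:, h], Y[:, h])
        sigma2 = np.sum((Q @ X[:, h] - Y[:, h]) ** 2) / (d * n1)
        mu_y = Y[:, ~h].mean(axis=1)
        sigma2_y = np.sum((Y[:, ~h] - mu_y[:, None]) ** 2) / (d * (n - n1))
        alpha, alpha_prev = n1 / n, alpha
        if abs(alpha - alpha_prev) < tol:              # |alpha_curr - alpha_prev| below tolerance
            break
    return Q, h
```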
Table 1 : Bilingual Experiment P@1. Numbers are based on 10 runs of each method. The En\u2192De, En\u2192Fi and En\u2192Es improvements are significant at p < 0.05 according to ANOVA on the different runs.", "cite_spans": [ { "start": 538, "end": 560, "text": "Artetxe et al. (2018b)", "ref_id": "BIBREF4" }, { "start": 768, "end": 790, "text": "Dinu and Baroni (2014)", "ref_id": "BIBREF9" }, { "start": 837, "end": 858, "text": "Artetxe et al. (2018a", "ref_id": "BIBREF3" }, { "start": 859, "end": 883, "text": "Artetxe et al. ( , 2017b", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 886, "end": 893, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Bilingual Word Embedding", "sec_num": "5.1" }, { "text": "In Table 1 we report the best and average precision@1 scores and the average number of iterations among 10 experiments, for different language translations. Our model improves the results in the translation tasks. In most setups our average case is better than the former best case. In addition, the noise-aware model is more stable and therefore requires fewer iterations to converge. The accuracy improvements are small but consistent, and we consider them a lower bound on the actual improvements, as the current test set comes from the same distribution as the training set and also contains similarly noisy pairs. The soft-EM version yields similar results, but takes roughly 15% more iterations to converge. Table 2 lists examples of pairs that were kept and discarded in the En-It dictionary. The algorithm learned that the pair (dog \u2192 dog) is an error. Another example is the translation (good \u2192 santo), which is a less-popular word-sense than (good \u2192 buon / buona). When analyzing the cleaned En-It dictionary we see that the percentage of potentially misleading pairs (same string, numbers and special characters) is reduced from 12.1% to 4.6%.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 1", "ref_id": null }, { "start": 740, "end": 747, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experiment Results", "sec_num": null }, { "text": "Experiment Setup The goal is to align English word embeddings derived from texts from different time periods, in order to identify which words changed meaning over time. The assumption is that most words remained stable, and hence the supervision is derived by aligning each word to itself. This problem contains noise in the lexicon by definition. We follow the exact setup described in Hamilton et al. (2016), but replace the OP algorithm with our noise-aware version 1 . We project the 1900s embeddings to the 1990s vector-space. The top 10 distant word embeddings after alignment are analyzed by linguistic experts for semantic shift.", "cite_spans": [ { "start": 381, "end": 416, "text": "described in Hamilton et al. (2016)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Diachronic (Historical) Word Embedding", "sec_num": "5.2" }, { "text": "Experiment Results 45.5% of the input pairs were identified as noise. After the post-processing step of removing the non-frequent words, as described in the experiment setup, we end up with 121 noisy words. Our algorithm successfully identifies all the top-changing words in Hamilton et al. (2016) as noise, and learns to ignore them in the alignment. In addition, we argue that our method provides a better alignment. Table 3 shows the nearest neighbor (NN) of a 1990s word in the 1900s vector-space after projection.
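For reference, a small sketch of the cosine nearest-neighbor lookup underlying both the P@1 scores reported above and the Table 3 analysis; the plain-cosine retrieval criterion and the names here are our own simplification and may differ in detail from the retrieval used in the cited setups.

```python
import numpy as np

def nearest_neighbors(Q, X_query, Y_vocab):
    """Index of the cosine-nearest target vector for each mapped query vector.

    X_query: d x m source vectors; Y_vocab: d x V target vocabulary vectors.
    """
    P = Q @ X_query
    P = P / np.linalg.norm(P, axis=0, keepdims=True)
    T = Y_vocab / np.linalg.norm(Y_vocab, axis=0, keepdims=True)
    return np.argmax(T.T @ P, axis=0)                  # length-m array of target indices

def precision_at_1(Q, X_test, Y_vocab, gold_idx):
    """P@1: fraction of test words whose nearest target is the gold translation."""
    return float(np.mean(nearest_neighbors(Q, X_test, Y_vocab) == np.asarray(gold_idx)))
```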
We look at the top 10 changed words in Hamilton et al. (2016) and 3 unchanged words. We compare the alignment of the OP projection to the Noise-aware Alignment (NAA). For example, with our solution the word actually, whose meaning shifted from \"in fact\" to expressing emphasis or surprise, is correctly mapped to really instead of believed. The word gay shifted from cheerful to homosexual, yet is still mapped to gay with NAA. This happens because the related embeddings (homosexual, lesbian and so on) are empty embeddings in the 1900s, leaving gay as the next-best candidate, which we argue is better than OP's society. The words car, driver and eve, whose meanings did not change, were incorrectly aligned by OP to cab, stepped and anniversary instead of to themselves.", "cite_spans": [], "ref_spans": [ { "start": 405, "end": 412, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Diachronic (Historical) Word Embedding", "sec_num": "5.2" }, { "text": "We introduced the problem of embedding space projection with noisy lexicons, and showed that existing projection methods are sensitive to the presence of noise. We proposed an EM algorithm that jointly learns the projection and identifies the noisy pairs. The algorithm can be used as a drop-in replacement for the OP algorithm, and was demonstrated to improve results on two NLP tasks. We provide code at https://github.com/NoaKel/Noise-Aware-Alignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Pre-processing: removing proper nouns, stop words and empty embeddings. Post-processing: removing words whose frequency is below 10^\u22125 in either year.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The work was supported by The Israeli Science Foundation (grant number 1555/15), and by the Israeli Ministry of Science, Technology and Space through the Israeli-French Maimonide Cooperation program. We also thank Roee Aharoni for helpful discussions and suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Gromov-Wasserstein alignment of word embedding spaces", "authors": [ { "first": "David", "middle": [], "last": "Alvarez-Melis", "suffix": "" }, { "first": "Tommi", "middle": [ "S" ], "last": "Jaakkola", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1809.00013" ] }, "num": null, "urls": [], "raw_text": "David Alvarez-Melis and Tommi S Jaakkola. 2018. Gromov-Wasserstein alignment of word embedding spaces. arXiv preprint arXiv:1809.00013.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Learning principled bilingual mappings of word embeddings while preserving monolingual invariance", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2289--2294", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance.
In Proceedings of the 2016 Conference on Empiri- cal Methods in Natural Language Processing, pages 2289-2294.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning bilingual word embeddings with (almost) no bilingual data", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "451--462", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017a. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 451-462, Vancouver, Canada. Association for Com- putational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intel- ligence (AAAI-18).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1805.06297" ] }, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. A robust self-learning method for fully un- supervised cross-lingual mappings of word embed- dings. arXiv preprint arXiv:1805.06297.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Unsupervised neural machine translation", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1710.11041" ] }, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2017b. Unsupervised neural ma- chine translation. 
arXiv preprint arXiv:1710.11041.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Projecting embeddings for domain adaption: Joint modeling of sentiment analysis in diverse domains", "authors": [ { "first": "Jeremy", "middle": [], "last": "Barnes", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Klinger", "suffix": "" }, { "first": "Sabine", "middle": [], "last": "Schulte Im Walde", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1806.04381" ] }, "num": null, "urls": [], "raw_text": "Jeremy Barnes, Roman Klinger, and Sabine Schulte im Walde. 2018. Projecting embeddings for domain adaption: Joint modeling of sentiment analysis in di- verse domains. arXiv preprint arXiv:1806.04381.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Word translation without parallel data", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" }, { "first": "Ludovic", "middle": [], "last": "Denoyer", "suffix": "" }, { "first": "Herv\u00e9", "middle": [], "last": "J\u00e9gou", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1710.04087" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Maximum likelihood from incomplete data via the em algorithm", "authors": [ { "first": "P", "middle": [], "last": "Arthur", "suffix": "" }, { "first": "Nan", "middle": [ "M" ], "last": "Dempster", "suffix": "" }, { "first": "Donald B", "middle": [], "last": "Laird", "suffix": "" }, { "first": "", "middle": [], "last": "Rubin", "suffix": "" } ], "year": 1977, "venue": "Journal of the royal statistical society. Series B (methodological)", "volume": "", "issue": "", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arthur P Dempster, Nan M Laird, and Donald B Rubin. 1977. Maximum likelihood from incomplete data via the em algorithm. Journal of the royal statistical society. Series B (methodological), pages 1-38.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Improving zero-shot learning by mitigating the hubness problem", "authors": [ { "first": "Georgiana", "middle": [], "last": "Dinu", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Georgiana Dinu and Marco Baroni. 2014. Improving zero-shot learning by mitigating the hubness prob- lem. CoRR, abs/1412.6568.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Diachronic word embeddings reveal statistical laws of semantic change", "authors": [ { "first": "Jure", "middle": [], "last": "William L Hamilton", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Leskovec", "suffix": "" }, { "first": "", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1605.09096" ] }, "num": null, "urls": [], "raw_text": "William L Hamilton, Jure Leskovec, and Dan Juraf- sky. 2016. 
Diachronic word embeddings reveal sta- tistical laws of semantic change. arXiv preprint arXiv:1605.09096.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Unsupervised machine translation using monolingual corpora only", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Ludovic", "middle": [], "last": "Denoyer", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1711.00043" ] }, "num": null, "urls": [], "raw_text": "Guillaume Lample, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2017. Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Exploiting similarities among languages for machine translation", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Le", "suffix": "" }, { "first": "", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1309.4168" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for ma- chine translation. arXiv preprint arXiv:1309.4168.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1532- 1543.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A discriminative latent-variable model for bilingual lexicon induction", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Cotterell", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1808.09334" ] }, "num": null, "urls": [], "raw_text": "Sebastian Ruder, Ryan Cotterell, Yova Kementched- jhieva, and Anders S\u00f8gaard. 2018. A discrimina- tive latent-variable model for bilingual lexicon in- duction. arXiv preprint arXiv:1808.09334.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A generalized solution of the orthogonal procrustes problem", "authors": [ { "first": "Peter", "middle": [], "last": "Schnemann", "suffix": "" } ], "year": 1966, "venue": "Psychometrika", "volume": "31", "issue": "1", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Schnemann. 1966. A generalized solution of the orthogonal procrustes problem. 
Psychometrika, 31(1):1-10.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Offline bilingual word vectors, orthogonal transformations and the inverted softmax", "authors": [ { "first": "L", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" }, { "first": "H", "middle": [ "P" ], "last": "David", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Turban", "suffix": "" }, { "first": "Nils", "middle": [ "Y" ], "last": "Hamblin", "suffix": "" }, { "first": "", "middle": [], "last": "Hammerla", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. CoRR, abs/1702.03859.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "On the role of seed lexicons in learning bilingual word embeddings", "authors": [ { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "247--257", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Vuli\u0107 and Anna Korhonen. 2016. On the role of seed lexicons in learning bilingual word embed- dings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), volume 1, pages 247-257.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Normalized word embedding and orthogonal transform for bilingual word translation", "authors": [ { "first": "Chao", "middle": [], "last": "Xing", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yiye", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1006--1011", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal trans- form for bilingual word translation. In Proceed- ings of the 2015 Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1006-1011.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Unsupervised cross-lingual transfer of word embedding spaces", "authors": [ { "first": "Ruochen", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Naoki", "middle": [], "last": "Otani", "suffix": "" }, { "first": "Yuexin", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1809.03633" ] }, "num": null, "urls": [], "raw_text": "Ruochen Xu, Yiming Yang, Naoki Otani, and Yuexin Wu. 2018. Unsupervised cross-lingual trans- fer of word embedding spaces. 
arXiv preprint arXiv:1809.03633.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Distributional hypernym generation by jointly learning clusters and projections", "authors": [ { "first": "Josuke", "middle": [], "last": "Yamane", "suffix": "" }, { "first": "Tomoya", "middle": [], "last": "Takatani", "suffix": "" }, { "first": "Hitoshi", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "Makoto", "middle": [], "last": "Miwa", "suffix": "" }, { "first": "Yutaka", "middle": [], "last": "Sasaki", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "1871--1879", "other_ids": {}, "num": null, "urls": [], "raw_text": "Josuke Yamane, Tomoya Takatani, Hitoshi Yamada, Makoto Miwa, and Yutaka Sasaki. 2016. Distribu- tional hypernym generation by jointly learning clus- ters and projections. In Proceedings of COLING 2016, the 26th International Conference on Compu- tational Linguistics: Technical Papers, pages 1871- 1879.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Adversarial training for unsupervised bilingual lexicon induction", "authors": [ { "first": "Meng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Huanbo", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1959--1970", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of the 55th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), vol- ume 1, pages 1959-1970.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Noise influence. (A): the effect of a noisy pair on 2D alignment. (B) mean error over non-noisy pairs as a function of noise level.", "type_str": "figure", "uris": null, "num": null }, "TABREF0": { "content": "
Method | En\u2192It (best / avg / iters) | En\u2192De (best / avg / iters) | En\u2192Fi (best / avg / iters) | En\u2192Es (best / avg / iters)
", "html": null, "text": "Artetxe et al., 2018b 48.53 48.13 573 48.47 48.19 773 33.50 32.63 988 37.60 37.33 808 Noise-aware Alignment 48.53 48.20 471 49.67 48.89 568 33.98 33.68 502 38.40 37.79 551", "type_str": "table", "num": null }, "TABREF2": { "content": "", "html": null, "text": "A sample of decisions from the noise-aware alignment on the English \u2192 Italian dataset.", "type_str": "table", "num": null }, "TABREF4": { "content": "
", "html": null, "text": "Diachronic Semantic Change Experiment. Upper-part: noisy pairs. Bold: real semantic shifts. Underlined: global genre/discourse shifts. Unmarked: corpus artifacts. Bottom-part: clean pairs: Italics: unchanged words, no semantic shift.", "type_str": "table", "num": null } } } }