{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:11:13.734300Z" }, "title": "Locality Preserving Loss: Neighbors that Live together, Align together", "authors": [ { "first": "Ashwinkumar", "middle": [], "last": "Ganesan", "suffix": "", "affiliation": { "laboratory": "", "institution": "University Of Maryland Baltimore County (UMBC)", "location": { "postCode": "21250", "region": "MD", "country": "USA" } }, "email": "" }, { "first": "Francis", "middle": [], "last": "Ferraro", "suffix": "", "affiliation": { "laboratory": "", "institution": "University Of Maryland Baltimore County (UMBC)", "location": { "postCode": "21250", "region": "MD", "country": "USA" } }, "email": "ferraro@umbc.edu" }, { "first": "Tim", "middle": [], "last": "Oates", "suffix": "", "affiliation": { "laboratory": "", "institution": "University Of Maryland Baltimore County (UMBC)", "location": { "postCode": "21250", "region": "MD", "country": "USA" } }, "email": "oates@cs.umbc.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a locality preserving loss (LPL) that improves the alignment between vector space embeddings while separating uncorrelated representations. Given two pretrained embedding manifolds, LPL optimizes a model to project an embedding and maintain its local neighborhood while aligning one manifold to another. This reduces the overall size of the dataset required to align the two in tasks such as crosslingual word alignment. We show that the LPL-based alignment between input vector spaces acts as a regularizer, leading to better and consistent accuracy than the baseline, especially when the size of the training set is small. We demonstrate the effectiveness of LPL-optimized alignment on semantic text similarity (STS), natural language inference (SNLI), multi-genre language inference (MNLI) and cross-lingual word alignment (CLA) showing consistent improvements, finding up to 16% improvement over our baseline in lower resource settings. 1", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We present a locality preserving loss (LPL) that improves the alignment between vector space embeddings while separating uncorrelated representations. Given two pretrained embedding manifolds, LPL optimizes a model to project an embedding and maintain its local neighborhood while aligning one manifold to another. This reduces the overall size of the dataset required to align the two in tasks such as crosslingual word alignment. We show that the LPL-based alignment between input vector spaces acts as a regularizer, leading to better and consistent accuracy than the baseline, especially when the size of the training set is small. We demonstrate the effectiveness of LPL-optimized alignment on semantic text similarity (STS), natural language inference (SNLI), multi-genre language inference (MNLI) and cross-lingual word alignment (CLA) showing consistent improvements, finding up to 16% improvement over our baseline in lower resource settings. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Over the last few years, vector space representations of words and sentences, extracted from encoders trained on a large text corpus, are primary components to model any natural language processing (NLP) task, especially while using neural or deep learning methods. 
Neural NLP models can be initialized with pretrained word embeddings learned using word2vec (Mikolov et al., 2013) or GloVe (Pennington et al., 2014) and fine-tuned sentence encoders like BERT (Devlin et al., 2018) or RoBERTa (Liu et al., 2019) . They show state-ofthe-art performance in a number of tasks from partof-speech tagging, named entity recognition, and machine translation to measuring textual similarity 1 Code Source: https://github.com/codehacken/localitypreservation (Wang et al., 2018 ). BERT's success has spawned research dedicated to understanding (Rogers et al., 2020) and reducing parameters in transformer architectures (Lan et al., 2019) . Despite these successes, supervised transfer learning is not a panacea. Models based on pretrained word embeddings (for bilingual induction) or BERT-based models require a large parallel corpus to train on. Can we reduce the number of training samples even further?", "cite_spans": [ { "start": 358, "end": 380, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF19" }, { "start": 390, "end": 415, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF21" }, { "start": 459, "end": 480, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF12" }, { "start": 492, "end": 510, "text": "(Liu et al., 2019)", "ref_id": "BIBREF18" }, { "start": 682, "end": 683, "text": "1", "ref_id": null }, { "start": 748, "end": 766, "text": "(Wang et al., 2018", "ref_id": "BIBREF32" }, { "start": 833, "end": 854, "text": "(Rogers et al., 2020)", "ref_id": null }, { "start": 908, "end": 926, "text": "(Lan et al., 2019)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We approach this problem by proposing a framework that exploits the inherent relationship between word or sentence representations in their pretrained manifolds. These relationships help train the model with fewer samples, since each training sample represents a group of instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We consider three types of tasks: (1) a regression task such as semantic text similarity; (2) a classification task (e.g., natural language inference (NLI)); and (3) vector space alignment, where the purpose is to learn a mapping between two independently trained embeddings (e.g., crosslingual word alignment) Learning bilingual word embedding models alleviates low resource problems by aligning embeddings from a source language that is rich in available text to a target language with a small corpus with limited vocabulary. 2 Largely, recent work focuses on learning a linear mapping to align two embedding spaces by minimizing the mean squared error (MSE) between embeddings of words projected from the source domain and their counterparts in the target domain (Mikolov et al., 2013; Ruder et al., 2017) . Minimizing MSE is useful when a large set of translated words (between source and target languages) is provided, but the mapping overfits when the parallel corpus is small or may require non-linear transformations and M2. f is the manifold alignment function while N 1 i (A) and N 1 i (B) are their respective neighbors in manifold M1. N 2 i (A) and N 2 i (B) are their neighbors in manifold M2. Figure (a) shows the alignment when trained with a MSE loss. The neighbors are distributed across the manifold due to overfitting. 
(b) shows alignment with a locality preserving loss (LPL) that reconstructs the original manifold in the target domain M2 maintaining its local structure. (S\u00f8gaard et al., 2018) . In order to reduce overfitting and improve word alignment, we propose an auxiliary loss function called locality preserving loss (LPL) that trains the model to align two sets of word embeddings while maintaining the local neighborhood structure around words in the source domain.", "cite_spans": [ { "start": 528, "end": 529, "text": "2", "ref_id": null }, { "start": 766, "end": 788, "text": "(Mikolov et al., 2013;", "ref_id": "BIBREF19" }, { "start": 789, "end": 808, "text": "Ruder et al., 2017)", "ref_id": "BIBREF25" }, { "start": 1493, "end": 1515, "text": "(S\u00f8gaard et al., 2018)", "ref_id": "BIBREF30" } ], "ref_spans": [ { "start": 1207, "end": 1217, "text": "Figure (a)", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "With classification and regression tasks where there are two inputs (e.g., NLI and STS-B), we show how the alignment between the two input subspace acts as regularizer, improving the model's accuracy on the task with MSE alone and when MSE and LPL are combined together. LPL achieves this by augmenting existing text \u2194 label pairs with pseudo-pairs constructed from their neighbors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Specifically, our main contributions are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose a loss function called locality preserving loss (LPL) to improve vector space alignment and show how the alignment acts as a regularizer while performing language inference and semantic text similarity. LPL improves correlation or accuracy of linear and non-linear mapping (deep networks) while exploiting the inherent geometries of existing pretrained embedding manifolds to optimize an alignment model (Table 1a, Figure 3 ).", "cite_spans": [], "ref_spans": [ { "start": 428, "end": 436, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We show LPL is flexible and can be optimized with SGD. Hence, it can be applied to both deep neural networks and linear transformations. In contrast, previous cross-lingual word alignment models that are a linear map between source and target language, are learned using singular value decomposition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We show an increase in correlation on semantic text similarity (STS-B) and accuracy on SNLI in comparison with the baseline when the models are trained with small datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Training with LPL shows up to 16.38% (90.1% relative), on SNLI up to 8.9% (19.3% relative) improvement when trained with just 1000 samples (0.002% of the dataset). 
We train a crosslingual word alignment model giving up to 4.1% (13.8% relative) improvement in comparison to a MSE optimized mapping while reducing the size of the parallel corpus required to train the mapping by 40% (3K out of 5K pairs).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 Background & Related Work", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Manifold learning methods represent these high dimensional datapoints in a lower dimensional space by extracting important features from the data, making it easier to cluster and search for similar data points. The methods are broadly categorized into linear, such as Principal Component Analysis (PCA), and non-linear algorithms. Nonlinear methods include multi-dimensional scaling (Cox and Cox, 2000, MDS) , locally linear embedding (Roweis and Saul, 2000, LLE) and Laplacian eigenmaps (Belkin and Niyogi, 2002, LE) . He and Niyogi (2004) compute the Euclidean distance between points to construct an adjacency graph and create a linear map that preserves the neighborhood structure of each point in the manifold. Another popular tool to learn manifolds is an autoencoder where a self-reconstruction loss is used to train a neural network (Rumelhart et al., 1985) . Vincent et al. (2008) design an autoencoder that is robust to noise by training it with a noisy input and then reconstructing the original noise-free input.", "cite_spans": [ { "start": 383, "end": 407, "text": "(Cox and Cox, 2000, MDS)", "ref_id": null }, { "start": 435, "end": 463, "text": "(Roweis and Saul, 2000, LLE)", "ref_id": null }, { "start": 488, "end": 517, "text": "(Belkin and Niyogi, 2002, LE)", "ref_id": null }, { "start": 520, "end": 540, "text": "He and Niyogi (2004)", "ref_id": "BIBREF16" }, { "start": 841, "end": 865, "text": "(Rumelhart et al., 1985)", "ref_id": "BIBREF27" }, { "start": 868, "end": 889, "text": "Vincent et al. (2008)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Dimensionality Reduction", "sec_num": "2.1" }, { "text": "In locally linear embedding (LLE) (Roweis and Saul, 2000), the datapoints are assumed to have a linear relation with their neighbors. To project each point, first a reconstruction loss is utilized to learn the linear relation between a point and their k neighbors. Then, the linear relation is used to learn the embeddings in the reduced dimension. 3 Wang et al. (2014) extend autoencoders by modifying the reconstruction loss to use nearest neighbors of data points, leveraging neighborhood relationships between datapoints from non-linear dimension reduction methods like LLE and Laplacian Eigenmaps.", "cite_spans": [ { "start": 349, "end": 350, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Dimensionality Reduction", "sec_num": "2.1" }, { "text": "Benaim and Wolf (2017) utilize a GAN to learn a unidirectional mapping. The total loss applied to train the generator is a combination of different losses, namely, an adversarial loss, a cyclic constraint (inspired by Zhu et al. (2017) ), MSE and an additional distance constraint where the distance between the point and its neighbors in the source domain are maintained in the target domain. Similarly, Conneau et al. 
(2017) learn to translate words without any parallel data with a GAN that optimizes a cross domain similarity scale to resolve the hubness problem (Dinu et al., 2014) .", "cite_spans": [ { "start": 218, "end": 235, "text": "Zhu et al. (2017)", "ref_id": "BIBREF38" }, { "start": 405, "end": 426, "text": "Conneau et al. (2017)", "ref_id": "BIBREF9" }, { "start": 567, "end": 586, "text": "(Dinu et al., 2014)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Manifold Alignment", "sec_num": "2.2" }, { "text": "These methods are the foundation to learn a mapping between two lower dimensional spaces (manifold alignment, Figure 1 ). Wang et al. (2011) propose a manifold alignment method that preserves the local similarity between points in the manifold being transformed and the correspondence between points that are common to both manifolds. Boucher et al. (2015) replace the manifold alignment algorithm that uses the nearest neighbor graph with a low rank alignment. Cui et al. (2014) align two manifolds without any pairwise data (unsupervised) by assuming the structure of the lower dimension manifolds are similar.", "cite_spans": [ { "start": 122, "end": 140, "text": "Wang et al. (2011)", "ref_id": null }, { "start": 335, "end": 356, "text": "Boucher et al. (2015)", "ref_id": "BIBREF5" }, { "start": 462, "end": 479, "text": "Cui et al. (2014)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 110, "end": 118, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Manifold Alignment", "sec_num": "2.2" }, { "text": "Our work is similar to Bollegala et al. (2017) where the meta-embedding (a common embedding space) for different vector representations is generated using a locally linear embedding (LLE) which preserves the locality. One drawback, though, is that LLE does not learn a singular functional mapping between the source and target vector spaces. A linear mapping between a word and its neighbor is learned for each new word. Hence, the metaembedding must be retrained every time new words are added to the vocabulary. Nakashole (2018) propose NORMA that uses neighborhood sensitive maps where the neighbors are learned rather than extracted from the existing embedding space. Simi-lar to NORMA, LPL uses a modified locally linear representation of each embedding but, unlike it, LPL uses actual nearest neighbors in order to learn an embedding. This is important as NNs may not be present in annotated parallel corpus. Hence, using NNs of annotated pairs in the corpus in LPL expands the size of the training dataset. LPL is optimized with gradient descent and can be easily added to a deep neural network (as seen in \u00a73.5).", "cite_spans": [ { "start": 23, "end": 46, "text": "Bollegala et al. (2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Manifold Alignment", "sec_num": "2.2" }, { "text": "In this section, we describe locality preserving loss, the assumption underlying the loss function and objective functions (eq. (2), eq. (3)) that are optimized. The cumulative loss function while training the model is defined in eq. (4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating Locality Preservation into Task-based Learning", "sec_num": "3" }, { "text": "The locality preserving loss (LPL, eq. 
(2)) is based on an important assumption about the source manifold: for a pre-defined neighborhood of k points (k is chosen manually) in the source embedding space we assume points are \"close\" to a given point such that it can be reconstructed using a linear map of its neighbors. This assumption is similar to that made in locally linear embedding (Roweis and Saul, 2000).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Locality Preservation Criteria", "sec_num": "3.1" }, { "text": "As individual embeddings can represent words or sentences, we call each individual embedding a unit. Consider two manifolds-M s \u2208 R n\u00d7d (source domain) and M t \u2208 R m\u00d7d (target domain)that are vector space representations of units within each domain. We do not make assumptions on the methods used to learn each manifold; they may be different. We also do not assume they share a common lexical vocabulary. For example, M s can be created using a standard distributed representation method like word2vec (Mikolov et al., 2013) and consists of English word embeddings while M t is created using GloVe (Pennington et al., 2014) and contains Italian embeddings. Let V s and V t be the respective vocabularies (collection of units) of the two manifolds. .. m t m }. While we do not assume that V t and V s must have common items, we do assume that there is some set of unit pairs that are connected by some consistent relationship. Let V p = {w p 1 ... w p c } be the set of the unit pairs; we consider V p a supervised training set (though it could be weakly supervised, e.g., derived from a parallel corpus). For example, in crosslingual word alignment this consistent relationship is whether one word can be translated as another; in natural language inference, the relationship is whether one sentence entails the other (the second must logically follow from the first). We assume this common set V p is much smaller than the individual vocabularies (c << m and c << n). The mapping (manifold alignment) function is f .", "cite_spans": [ { "start": 503, "end": 525, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF19" }, { "start": 599, "end": 624, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "3.2" }, { "text": "Hence V s = {w s 1 ... w s n } and V t = {w", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "3.2" }, { "text": "In this paper, we experiment with three types of tasks: cross-lingual word alignment (mapping), natural language inference (classification), and semantic text similarity (regression). In cross-lingual word alignment, V s and V t represent the source and target vocabularies, V p is a bilingual dictionary, and M t and M s are the target and source manifolds. f with \u03b8 f parameters is a linear projection with a single weight matrix W . For NLI, V t and V s are target and source sentences with M t and M s being their manifolds. f is a 2-layer FFN.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "3.2" }, { "text": "We use a mapping function f : M s \u2192 M t to align manifold M s to M t . The exact structure of f is task-specific: for example, in our experiments f is a linear function for crosslingual word alignment and it is a 2-layer neural network (non-linear mapping) for NLI. 
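To make this concrete, below is a minimal PyTorch sketch of the two forms f takes in our experiments: a single weight matrix for cross-lingual word alignment and a small two-layer feed-forward map for the sentence tasks. The embedding dimension, hidden size, and activation are illustrative assumptions, not the exact configurations reported in the experiments.

```python
import torch
import torch.nn as nn

class LinearMap(nn.Module):
    """f(m) = W m: linear alignment map, as used for cross-lingual word alignment."""
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)

    def forward(self, m_s):
        return self.W(m_s)

class NonLinearMap(nn.Module):
    """2-layer feed-forward map, as used to align sentence embeddings (e.g., for NLI)."""
    def __init__(self, dim, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, m_s):
        return self.net(m_s)

# Example: map 300-dimensional source word vectors into the target space.
f = LinearMap(dim=300)
projected = f(torch.randn(8, 300))   # (batch, dim) -> (batch, dim)
```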
The mapping is optimized using three loss functions: an orthogonal transform (Xing et al., 2015) L ortho (constrain W \u22121 = W T ); mean squared error L mse (eq. 1); and locality preserving loss (LPL) L lpl (eq. 2).", "cite_spans": [ { "start": 343, "end": 362, "text": "(Xing et al., 2015)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Locality Preserving Loss (LPL)", "sec_num": "3.3" }, { "text": "The standard loss function to align two manifolds is mean squared error (MSE) (Ruder et al., 2017; Artetxe et al., 2016) ,", "cite_spans": [ { "start": 78, "end": 98, "text": "(Ruder et al., 2017;", "ref_id": "BIBREF25" }, { "start": 99, "end": 120, "text": "Artetxe et al., 2016)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Locality Preserving Loss (LPL)", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L mse = i\u2208V p L i mse = i\u2208V p L i mse f (m s i ) \u2212 m t i 2 2 ,", "eq_num": "(1)" } ], "section": "Locality Preserving Loss (LPL)", "sec_num": "3.3" }, { "text": "which minimizes the distance between the unit's representation in M t (the target manifold) and projected vector from M s . The function f (m s i ) has learnable parameters \u03b8 f . MSE can lead to an optimal alignment when there is a large number of units in the parallel corpus to train the mapping between the two manifolds (Ruder et al., 2017) . However, when the parallel corpus V p is small, the mapping is prone to overfitting (Glavas et al., 2019) .", "cite_spans": [ { "start": 324, "end": 344, "text": "(Ruder et al., 2017)", "ref_id": "BIBREF25" }, { "start": 431, "end": 452, "text": "(Glavas et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Locality Preserving Loss (LPL)", "sec_num": "3.3" }, { "text": "Locality preserving loss (LPL: eq. 2) optimizes the mapping f to project a unit together with its neighbors. For a small neighborhood of k units, the source representation of unit w s i is assumed to be a linear combination of its source neighbors. We represent this small neighborhood (of the source embedding m s i of word w s i ) with N k (m s i ), and we compute the local linear reconstruction using W ij , a learned weight associated with each word in the neighborhood of the current word, N k (m s i ). LPL requires that the projected source embedding f (m s i ) be a weighted average of all the projected vectors of its neighbors f (m s j ). Formally, for a particular common item i, LPL at i minimizes", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Locality Preserving Loss (LPL)", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L i lpl = m t i \u2212 m s j \u2208N k (m s i ) W ij f (m s j ) 2", "eq_num": "(2)" } ], "section": "Locality Preserving Loss (LPL)", "sec_num": "3.3" }, { "text": "with", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Locality Preserving Loss (LPL)", "sec_num": "3.3" }, { "text": "L lpl = m s i ,m t i \u2208V s L i lpl .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Locality Preserving Loss (LPL)", "sec_num": "3.3" }, { "text": "Intuitively, W represents the relation between a word and its neighbors in the source domain. We learn it by minimizing the LLE-inspired loss. 
For a common i this is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Locality Preserving Loss (LPL)", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L i lle = m s i \u2212 m s j \u2208N k (m s i ) W ij m s j 2", "eq_num": "(3)" } ], "section": "Locality Preserving Loss (LPL)", "sec_num": "3.3" }, { "text": "with", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Locality Preserving Loss (LPL)", "sec_num": "3.3" }, { "text": "L lle = m s i \u2208V p L i lle .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Locality Preserving Loss (LPL)", "sec_num": "3.3" }, { "text": "The weights W are subject to the constraint W ij = 1, making the projected embeddings invariant to scaling (Roweis and Saul, 2000) . We can formalize this with an objective L ortho = W W \u2212 I. LPL reduces overfitting because the mapping function f does not simply learn the mapping between unit embeddings in the parallel corpus: it also optimizes for a projection of the unit's neighbors that are not part of the parallel corpus-effectively expanding the size of the training set by the factor k.", "cite_spans": [ { "start": 107, "end": 130, "text": "(Roweis and Saul, 2000)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Locality Preserving Loss (LPL)", "sec_num": "3.3" }, { "text": "The total supervised loss becomes:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Training with LPL", "sec_num": "3.4" }, { "text": "L sup = L mse (\u03b8 f ) + \u03b2 * L lpl (\u03b8 f , W ) + L lle (W ) + L ortho (\u03b8 f ) (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Training with LPL", "sec_num": "3.4" }, { "text": "We introduce a constant \u03b2 to allow control over the contribution of LPL to the total loss. Although we minimize total loss (4), shown explicitly with variable dependence, the optimization can be unstable task. The pipeline consists of a 2-layer FFN used to classify sentence pairs. The output layer is of size 3 to classify the input into entailment, contradiction and neutral. It has a size of 1 to generate a continuous value between 0 and 1 for STS-B. The premise / sentence 1 and hypothesis / sentence 2 subspaces are aligned using a MSE and LPL loss that are then added as a concatenated input to train the classifier / regressor. A \u03b4 hyperparameter configured for each label provides the ability to perform alignment for entailment and contradiction while performing divergence for neutral input pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Training with LPL", "sec_num": "3.4" }, { "text": "as there are two sets of independent parameters W and \u03b8 f representing different relationships between datapoints. To reduce the instability, we split the training into two phases. In the first phase, W is learned by minimizing L lle alone and the weights are frozen. Then, L mse and L lpl are minimized while keeping W fixed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Training with LPL", "sec_num": "3.4" }, { "text": "One key difference between our work and Artetxe et al. (2016) is that they optimize the mapping function by taking the singular vector decomposition (SVD) of the squared loss while we use gradient descent to find optimal values of \u03b8 f . 
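The two-phase procedure can be sketched as follows: phase one solves eq. (3) for the reconstruction weights W and freezes them; phase two minimizes eq. (1) plus β times eq. (2) by gradient descent. The snippet below is a minimal illustration under simplifying assumptions (random stand-in embeddings, arbitrary k and β, and the orthogonality term omitted); it is not the exact training script.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.spatial import cKDTree

def lle_weights(M_s, neighbors, reg=1e-3):
    """Phase 1 (eq. 3): for every source point, solve for the weights that best
    reconstruct it from its k neighbors, constrained to sum to one."""
    n, k = neighbors.shape
    W = np.zeros((n, k))
    for i in range(n):
        Z = M_s[neighbors[i]] - M_s[i]            # (k, d) neighbors centered on the point
        G = Z @ Z.T                               # local Gram matrix
        G = G + reg * np.trace(G) * np.eye(k)     # regularize for numerical stability
        w = np.linalg.solve(G, np.ones(k))
        W[i] = w / w.sum()                        # enforce the sum-to-one constraint
    return W

def lpl_term(f, M_s, M_t, pairs, neighbors, W):
    """Eq. (2): the LLE-weighted mix of the *projected* neighbors of a paired
    source unit should land on its target embedding; W stays frozen here."""
    losses = []
    for i, t in pairs:
        nbr = torch.as_tensor(neighbors[i], dtype=torch.long)
        proj = f(M_s[nbr])                                    # (k, d) projected neighbors
        w = torch.as_tensor(W[i], dtype=proj.dtype)
        recon = w @ proj                                      # (d,) weighted average
        losses.append(((M_t[t] - recon) ** 2).sum())
    return torch.stack(losses).mean()

# Illustrative setup: random stand-ins for the pretrained source/target embeddings.
d, k, beta = 300, 10, 0.5
M_s_np = np.random.randn(1000, d).astype(np.float32)
tree = cKDTree(M_s_np)
_, nn_idx = tree.query(M_s_np, k=k + 1)       # k+1 because each point is its own nearest neighbor
neighbors = nn_idx[:, 1:]
W = lle_weights(M_s_np, neighbors)            # phase 1, then frozen

M_s, M_t = torch.tensor(M_s_np), torch.randn(800, d)
pairs = [(i, i) for i in range(200)]          # stand-in for the parallel corpus V_p
f = nn.Linear(d, d, bias=False)               # theta_f
opt = torch.optim.SGD(f.parameters(), lr=0.01)

for _ in range(5):                            # phase 2: MSE + beta * LPL
    opt.zero_grad()
    mse = torch.stack([((f(M_s[i]) - M_t[t]) ** 2).sum() for i, t in pairs]).mean()
    loss = mse + beta * lpl_term(f, M_s, M_t, pairs, neighbors, W)
    loss.backward()
    opt.step()
```

Because LPL enters only as one more differentiable term, swapping f for a deeper network requires no change to this loop.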
As our experimental results show, while both can be empirically advantageous, our work allows LPL to be easily added as just another term in the loss function. With the exception of the alternating optimization of W , our approach does not need special optimization updates to be derived. Euclidean distance between embeddings is used to find NNs.", "cite_spans": [ { "start": 40, "end": 61, "text": "Artetxe et al. (2016)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Model Training with LPL", "sec_num": "3.4" }, { "text": "MSE and LPL can be used to align two vector spaces: in particular, we show that the objectives can align two subspaces in the same manifold. When combined with cross entropy loss in a classification task, this subspace alignment effectively acts as a regularizer. Figure 2 shows an example architecture where alignment is used as a regularizer for the NLI task. The architecture contains a two layer FFN used to perform language inference, i.e., to predict if the given sentence pairs are entailed, contradictory or neutral. The in-put to the network is a pair of sentence vectors. The initial representations are generated from any sentence/language encoder, e.g., from BERT. The source/sentence1/premise embeddings are first projected to the hypothesis space. The projected vector is then concatenated with the original pair of embeddings and given as input to the network. The alignment losses (MSE and LPL) are computed between the projected premise and original hypothesis embeddings. If the baseline network is optimized with cross entropy (CE) loss to predict label y i , the total loss becomes:", "cite_spans": [], "ref_spans": [ { "start": 264, "end": 272, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Alignment as Regularization", "sec_num": "3.5" }, { "text": "L total = \u03b3 i \u03b4 y i (L i mse + L i lpl + L i lle (W )) + CE y i (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment as Regularization", "sec_num": "3.5" }, { "text": "where \u03b3 is a hyperparameter that controls the impact of the loss (learning rate). Thus, the loss (eq. 5)is an extension of eq. (4) for a classification task but without L ortho , which is not applied as f is a 2-layer FFN (non-linear mapping) and the W W = I constraint for each layer's weights cannot be guaranteed. The alignment loss becomes a vehicle to bias the model based upon our knowledge of the task, forcing a specific behavior on the network. The behavior can be controlled with \u03b4, which can be a positive or negative value specific to each label. A positive \u03b4 optimizes the network to align the embeddings while a negative \u03b4 is a divergence loss. In NLI we assign a constant scalar to all samples with a specific label (i.e., 100 for entailment, 1.0 for contradiction and \u22125.0 for neutral). The scalars were set when optimizing network hyper-parameters. As the optimizer minimizes the loss, a divergence loss tends to \u2212\u221e; in practice, we clip the negative loss value at \u22121.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment as Regularization", "sec_num": "3.5" }, { "text": "We demonstrate the effectiveness of the locality preserving alignment on three types of tasks: semantic text similarity (regression), natural language inference (text classification) and crosslingual word alignment (mapping / regression). 
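As a reference point for the experiments that follow, the label-weighted regularizer of eq. (5) can be sketched as below. The δ values and the clipping at −1 follow §3.5; the single linear alignment layer, the batch construction, and the omission of the L_lle term are simplifying assumptions for the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

DELTA = {0: 100.0, 1: 1.0, 2: -5.0}   # entailment, contradiction, neutral (Sec. 3.5)
GAMMA = 1.0                           # weight of the alignment terms

class AlignRegularizedNLI(nn.Module):
    def __init__(self, dim=768, hidden=1024, n_classes=3):
        super().__init__()
        self.align = nn.Linear(dim, dim)               # projects premise into the hypothesis space
        self.clf = nn.Sequential(
            nn.Linear(3 * dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, premise, hypothesis):
        proj = self.align(premise)
        logits = self.clf(torch.cat([premise, hypothesis, proj], dim=-1))
        return logits, proj

def total_loss(model, premise, hypothesis, labels, lpl_terms=None):
    """Eq. (5): cross entropy plus per-example, label-weighted alignment losses.
    `lpl_terms` would hold the per-example LPL values of eq. (2); they are treated
    exactly like the MSE term and left out of this sketch for brevity."""
    logits, proj = model(premise, hypothesis)
    ce = F.cross_entropy(logits, labels)
    mse_i = ((proj - hypothesis) ** 2).sum(dim=-1)           # per-example alignment error
    delta = torch.tensor([DELTA[int(y)] for y in labels])
    align = torch.clamp(delta * mse_i, min=-1.0)             # negative delta = divergence, clipped at -1
    if lpl_terms is not None:
        align = align + torch.clamp(delta * lpl_terms, min=-1.0)
    return ce + GAMMA * align.mean()

# usage with frozen sentence embeddings (e.g., from BERT)
model = AlignRegularizedNLI()
p, h = torch.randn(16, 768), torch.randn(16, 768)
y = torch.randint(0, 3, (16,))
loss = total_loss(model, p, h, y)
loss.backward()
```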
In order to compute local neighborhoods, as needed for, e.g., (2), we build a standard KD-Tree and find the nearest neighbors using Euclidean distance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Results & Analysis", "sec_num": "4" }, { "text": "In semantic text similarity (STS), we use the STS-B dataset (Cer et al., 2017) that is widely utilized as a part of the GLUE benchmark (Wang et al., 2018 (Reimers and Gurevych, 2019) where sentence embeddings are extracted individually using a BERT model (Devlin et al., 2018) . An attached connected feedforward network (FFN) is trained to predict a sentence similarity score between 0 and 1 (optimized with squared error loss). 4 To analyze the impact of various loss functions, the BERT model parameters are frozen so that sentence embeddings remain the same and only the parameters of the FFN are optimized. As described in \u00a73.4, the baseline model is additionally trained with a MSE loss that aligns one sentence manifold to another (m 1 \u2192 m 2 ) and the third model also trains with LPL. \u03b4 is set to mimic the normalized label thus generating the largest loss while aligning two sentences that are same. The contrastive loss separates the two sentence embeddings when they are dissimilar. The maximum margin is set to 0.1.", "cite_spans": [ { "start": 60, "end": 78, "text": "(Cer et al., 2017)", "ref_id": "BIBREF8" }, { "start": 135, "end": 153, "text": "(Wang et al., 2018", "ref_id": "BIBREF32" }, { "start": 154, "end": 182, "text": "(Reimers and Gurevych, 2019)", "ref_id": "BIBREF22" }, { "start": 255, "end": 276, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF12" }, { "start": 430, "end": 431, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Text Similarity (STS)", "sec_num": "4.1" }, { "text": "In order to study the impact on model performance when the training data is small, we limit the sampled training data size to upto 50% of the original (total dataset size is 5k). 5 Figure 3 shows the Pearson correlation between the baseline model and the same regularizer with MSE and LPL (eq. 4). As observed, the correlation of models trained with LPL is higher for every training dataset size. The relative increase in correlation is in the range of 10.83 to 90.1%. We note here that the correlation cannot be compared with the original BERT model as we do not fine-tune the entire network but only the FFN in order to measure the improvements with the addition of LPL. ", "cite_spans": [], "ref_spans": [ { "start": 181, "end": 189, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Semantic Text Similarity (STS)", "sec_num": "4.1" }, { "text": "To test the effectiveness of alignment as a regularizer, a 2-layer FFN is used as shown in Figure 2 ; we measure the change in accuracy with respect to this baseline. An additional single layer network is utilized to perform the alignment with premise and hypothesis spaces. We experiment with the impact of the loss function on two datasets: the Stanford natural language inference (SNLI) (Bowman et al., 2015a) and the multigenre natural language inference dataset (MNLI) (Williams et al., 2018 (b) A map of various transformations that can be performed as described in Artetxe et al. (2018) . We indicate which steps can easily be combined with backpropagation. 
Figure 6 : Accuracy of alignment regularization on MNLI dataset with a varying number of mismatched out-of-genre samples, up to 5% of the training dataset only (total: 300K samples).", "cite_spans": [ { "start": 474, "end": 496, "text": "(Williams et al., 2018", "ref_id": "BIBREF35" }, { "start": 572, "end": 593, "text": "Artetxe et al. (2018)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 91, "end": 99, "text": "Figure 2", "ref_id": "FIGREF2" }, { "start": 665, "end": 673, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Natural Language Inference", "sec_num": "4.2" }, { "text": "training set is reduced. 5 The reduced datasets are created by randomly sampling the required number from the entire dataset. The graphs show that an alignment loss consistently boosts accuracy of the model with respect to the baseline. The difference in accuracy (in Figure 4) is larger initially, it reduces as the training set becomes larger. This is because we calculate the neighbors for each premise from the training dataset only rather than any external text like Wikipedia (i.e., generate embeddings for Wikipedia sentences and then use them as neighbors). As the training size increases LPL has diminishing returns, as the neighbors tend to 5 Model accuracy using MSE and MSE + LPL with 100% of the training data for STS-B is provided in Appendix A.1 and A.2 for SNLI / MNLI. be part of the training pairs themselves.", "cite_spans": [ { "start": 25, "end": 26, "text": "5", "ref_id": null }, { "start": 651, "end": 652, "text": "5", "ref_id": null } ], "ref_spans": [ { "start": 268, "end": 277, "text": "Figure 4)", "ref_id": null } ], "eq_spans": [], "section": "Natural Language Inference", "sec_num": "4.2" }, { "text": "The cross lingual word alignment dataset is from Dinu et al. (2014) . The dataset is extracted from the Europarl corpus and consists of word pairs split into training (5k pairs) and test (1.5k pairs) respectively. 6 From the 5K word pairs available for training only 3K pairs are used to train the model with LPL and an additional 150 pairs are used as the validation set (in case of Finnish 2.5K pairs are used). This is a reduced set in comparison to the models in Table 1a that are trained with all pairs.'", "cite_spans": [ { "start": 49, "end": 67, "text": "Dinu et al. (2014)", "ref_id": "BIBREF13" }, { "start": 214, "end": 215, "text": "6", "ref_id": null } ], "ref_spans": [ { "start": 467, "end": 475, "text": "Table 1a", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Crosslingual Word Alignment", "sec_num": "4.3" }, { "text": "Compared to previous methods that look at explicit mapping of points between the two spaces, LPL tries to maintain the relations between words and their neighbors in the source domain while projecting them into the target domain. Along with the mapping methods in Table 1a , previous methods also apply additional pre/post processing tranforms on the word embeddings as documented in Artetxe et al. (2018) (described in Table 1b ). Crossdomain similarity local scaling (CSLS) (Conneau et al., 2017) is used to retrieve the translated word. 
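For reference, CSLS retrieval over unit-normalized embeddings can be sketched as follows; the neighborhood size K = 10 follows Conneau et al. (2017), and the random vectors in the usage line are placeholders for the projected source and target embeddings.

```python
import numpy as np

def csls_retrieve(src_proj, tgt, K=10):
    """CSLS (Conneau et al., 2017): rank target words by
    2*cos(x, y) - r_tgt(x) - r_src(y), where r(.) is the mean cosine
    similarity to the K nearest neighbors in the other space."""
    src = src_proj / np.linalg.norm(src_proj, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    sims = src @ tgt.T                                   # (n_src, n_tgt) cosine similarities
    r_src = np.sort(sims, axis=1)[:, -K:].mean(axis=1)   # mean sim of each projected source to its K NN targets
    r_tgt = np.sort(sims, axis=0)[-K:, :].mean(axis=0)   # mean sim of each target to its K NN projected sources
    csls = 2 * sims - r_src[:, None] - r_tgt[None, :]
    return csls.argmax(axis=1)                           # index of the predicted translation

# usage: projected source words (f applied to English vectors) vs. Italian vectors
pred = csls_retrieve(np.random.randn(100, 300), np.random.randn(500, 300))
```

Subtracting the two neighborhood terms penalizes target words that are close to many projected source words, which is how CSLS mitigates the hubness problem noted earlier.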
Table 1a shows the accuracy of our approach in comparison to other methods.", "cite_spans": [ { "start": 476, "end": 498, "text": "(Conneau et al., 2017)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 264, "end": 272, "text": "Table 1a", "ref_id": "TABREF3" }, { "start": 420, "end": 428, "text": "Table 1b", "ref_id": "TABREF3" }, { "start": 540, "end": 548, "text": "Table 1a", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Crosslingual Word Alignment", "sec_num": "4.3" }, { "text": "The accuracy of our proposed approach is better or comparable to previous methods that use similar numbers of transforms. It is similar to Artetxe et al. 2018 ing steps. This is because we choose to optimize using gradient descent as compared to a matrix factorization approach. Thus, our implementation of Artetxe et al. (2016) (MSE Loss only) under performs in comparison to the original baseline while giving improvements with LPL. Gradient descent has been adopted in this case because the loss function can be easily adopted by any neural network architecture in the future as compared to matrix factorization methods that will force architectures to use a two-step training process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Crosslingual Word Alignment", "sec_num": "4.3" }, { "text": "Although we introduce LPL in this paper, models in \u00a74.1 and \u00a74.2 are both trained with a combination of MSE and LPL. This raises the question: how much does LPL contribute to the overall performance of the model? We analyze this question by training a model separately on the STS-B dataset with MSE only and then comparing it with the prior model trained with the combined losses. In Figure 7 , we see that the model trained with MSE + LPL performs better with a maximum of 20.2% relative improvement (sampled dataset size is 30%) over one trained with MSE alone. Additionally when the dataset size is small (less than 10% of the training data), it is observed that variation in accuracy is smaller for the model with the combined loss.", "cite_spans": [], "ref_spans": [ { "start": 384, "end": 393, "text": "Figure 7", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Ablation Study", "sec_num": "4.4" }, { "text": "Additionally, we show the results of ablation studies on SNLI, MNLI (matched) and MNLI (mismatched). As seen in figures 8, 9, 10, LPL's contribution to the model's accuracy in these tasks is lower in comparison to its contribution STS-B, but the variation in accuracy is smaller with it. Thus, we can conclude that LPL makes the model's performance consistent across experiments. the initial alignment (or divergence) between the premise and hypothesis. Also, we observe that a model regularized with MSE and LPL are more likely to reach optimal parameters consistently. 7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study", "sec_num": "4.4" }, { "text": "In this paper, we introduce a new locality preserving loss (LPL) function that learns a linear relation between the given word and its neighbors and then utilizes it to learn a mapping for the neighborhood words that are not a part of the word pairs (parallel corpus). Also, we show how the results of the method are comparable to current supervised models while requiring a reduced set of word pairs to train on. The models are trained with SGD as compared to others that learn with SVD. 
Additionally, the same alignment loss is applied as a regularizer in a classification task (NLI) and a regression task 7 Check appendix section B for more details.", "cite_spans": [ { "start": 608, "end": 609, "text": "7", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "(STS-B) to demonstrate how it can improve the accuracy of the model over the baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "In this section, we discuss the potential benefits and risks of using a locality preserving loss in the NLP tasks described in this paper. Benefits of LPL. The main motivation of our work is to train models with limited data and showcase the effectiveness of locality preserving loss. When the dataset is small, models overfit the training data unable to generalize and exacerbate language biases. The main benefit of using LPL is that it maintains relationships between a datapoint and its neighbor in the target embedding space, restricting the model from overfitting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Broader Impact", "sec_num": null }, { "text": "Risks of Utilizing LPL. LPL's ability to maintain relationships between points that are present in the source manifold after projection, can also be a risk. The weights W in equation 3 are learned by constructing a linear map between a datapoint and its neighbors using embeddings extracted from a pretrained model. Thus, any biased relationships prevalent in the pretrained model will become of a component of LPL and ultimately a part of the downstream fine-tuned model too. Hence it is necessary to evaluate the pretrained model thoroughly prior its use with LPL.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Broader Impact", "sec_num": null }, { "text": "This section discusses in detail the various experiments conducted in this paper. For each set of experiments, we provide an overview of the dataset used, a description of the dataset (with samples), information about the training set and list of any hyper-parameters that are optimized. The computing infrastructure used is a single Tesla P100-SXM2 GPU to train a single model. The GPU consumption is driven by the base text / word encoding model. GPU usage while training the feedforward network (FFN) is limited. In practice, once the embeddings are extracted from an language encoder (like BERT), multiple FFs can be simultaneously trained on a single GPU. Additional GPUs are used to scale experiments while training models with different loss function combinations and dataset sizes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Experimentation Details", "sec_num": null }, { "text": "In order to understand the impact of Locality Preserving Loss (LPL) while finetuning a model to measure how similar two sentences are, we use the STS-B dataset (Cer et al., 2017) that is widely utilized as a part of the GLUE benchmark (Wang et al., 2018) . STS-B consists of a total of 8628 pairs of sentences of which 5749 are for training, 1500 pairs are part of the dev set (for hyper-parameter tuning) and 1379 are test pairs. The labels are a continuous value between 0 \u2212 5 where 0 represents sentences that are not similar while 5 represents sentences are that have the same syntax and meaning. While training various models, the labels are normalized between 0 and 1. 
Table 3 shows a few examples of sentence pairs in the dataset.", "cite_spans": [ { "start": 160, "end": 178, "text": "(Cer et al., 2017)", "ref_id": "BIBREF8" }, { "start": 235, "end": 254, "text": "(Wang et al., 2018)", "ref_id": "BIBREF32" } ], "ref_spans": [ { "start": 675, "end": 682, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "A.1 Semantic Text Similarity (STS)", "sec_num": null }, { "text": "Sentence 2 Label A plane is taking off. An air plane is taking off.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence 1", "sec_num": null }, { "text": "A man is spreading shreded cheese on a pizza.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.0", "sec_num": null }, { "text": "A man is spreading shredded cheese on an uncooked pizza.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5.0", "sec_num": null }, { "text": "Three men are playing chess.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.8", "sec_num": null }, { "text": "Two men are playing chess.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.8", "sec_num": null }, { "text": "Table 3: Sample sentence pairs from STS-B (Cer et al., 2017) dataset with their corresponding labels. The labels represent subjective human judgements of how similar the sentences are. It is a continuous variable.", "cite_spans": [ { "start": 42, "end": 60, "text": "(Cer et al., 2017)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "2.6", "sec_num": null }, { "text": "A BERT model (Devlin et al., 2018) with an additional 2-layer feedforward network (FFN) (P n ) predicts the similarity score between sentences and is optimized with MSE (between predicted score and label). The FFN consists of 2 hidden layers, each of size 1024. The baseline model (trained only with MSE) has a concatenated input (as shown in Figure 2 baseline). While using an additional alignment loss (MSE or LPL), an additional single hidden layer FFN (A n ) is attached that aligns the two sentence manifolds. The aligned projection of sentence 1 is then concatenated as input to P n (as shown in Figure 2 baseline with alignment). Although this increases the number of parameters in the P n (by 768 x 1024) for models trained with alignment, we experimented with increasing the baseline model with the same number of hidden layer parameters and found the baseline's performance to decrease or remain constant. Hence, the size of the layers is maintained at 1024. Figure 3 shows the pearson correlation between the predicted and actual similarity score. For each sample size on the X-axis, 3-fold cross validation is performed. Each time cross-validation is performed, different training pairs are randomly sampled (without replacement) from the complete dataset and the seed for initializing weights of each layer in the FFN is changed. To maintain consistency across experiments, the seed is maintained constant across trained models.", "cite_spans": [ { "start": 13, "end": 34, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 343, "end": 351, "text": "Figure 2", "ref_id": "FIGREF2" }, { "start": 602, "end": 610, "text": "Figure 2", "ref_id": "FIGREF2" }, { "start": 969, "end": 977, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "2.6", "sec_num": null }, { "text": "As described in \u00a73.5, the hyperparameters include \u03b3 and \u03b4. 
\u03b3 is a learning rate that defines how much of the alignment loss functions contribute to the overall loss. In practice, \u03b3 is set to 1.0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1.2 Hyperparameters", "sec_num": null }, { "text": "Equation 5 uses LPL as a regularizer. We implement this as a contrastive divergence loss.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1.2 Hyperparameters", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L i mcd = \u03b3 i max(\u22121.0, \u03b4 y i * L i mse ) (6) L i lcd = \u03b3 i max(\u22121.0, \u03b4 y i * L i lpl )", "eq_num": "(7)" } ], "section": "A.1.2 Hyperparameters", "sec_num": null }, { "text": "The above equations change L mse and L lpp to a contrastive divergence (max margin) loss. We set the maximum margin to 0.1. The margin is manually tuned after optimizing it on validation pairs. The overall learning rate to train the model is 0.0001. The optimizer is RMSProp. Figure 11 : Pearson correlation on semantic text similarity with STS-B dataset for training sample size greater than 50% of the original dataset.", "cite_spans": [], "ref_spans": [ { "start": 276, "end": 285, "text": "Figure 11", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "A.1.2 Hyperparameters", "sec_num": null }, { "text": "\u03b4 is the label specific hyper-parameter that defines how much loss can be contributed by a specific label. As the STS label is a continuous variable, \u03b4 is equal to the label value. This forces sentence pairs with scores that are 5.0 to have maximum loss while pairs with scores that tend towards 0 create a negative squared loss (this moves the embeddings apart) limited up-to the maximum margin.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1.2 Hyperparameters", "sec_num": null }, { "text": "In Figure 11 , we see that the LPL + MSE loss model performs better than the baseline when dataset size is increased. The lowest bound in performance is higher than the upper bound of the baseline. LPLs improvement over the baseline when the entire dataset is used shows that as a whole STS-B may benefit from LPL irrespective of the dataset size. This is because LPL explicitly models a relationship between sentences present in a given sample text's neighborhood, ensuring that those relationships are maintained while computing the similarity between a given sentence pair.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 12, "text": "Figure 11", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "A.1.3 Performance on Larger Dataset", "sec_num": null }, { "text": "A.2 Natural Language Inference (NLI) \u00a74.2 discusses in detail the experiments with natural language inference. Experiments are conducted on SNLI (Bowman et al., 2015b) and MNLI (Williams et al., 2017) datasets. Stanford's natural language inference dataset contains sentence pairs consisting of a premise and hypothesis. The model predicts if the hypothesis entails or contradicts the premise or if they have a neutral relationship (i.e., the two sentences are not related). The dataset contains 500K training pairs, 10K pairs in the dev set and 10K pairs of sentences for testing. 
Table 4 shows a few sentence pairs from the dataset.", "cite_spans": [ { "start": 177, "end": 200, "text": "(Williams et al., 2017)", "ref_id": "BIBREF36" } ], "ref_spans": [ { "start": 582, "end": 589, "text": "Table 4", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "A.1.3 Performance on Larger Dataset", "sec_num": null }, { "text": "MNLI (Williams et al., 2017) is an extension of the SNLI dataset where the sentences are from mul-Premise Hypothesis Label A person on a horse jumps over a broken down airplane.", "cite_spans": [ { "start": 5, "end": 28, "text": "(Williams et al., 2017)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "A.1.3 Performance on Larger Dataset", "sec_num": null }, { "text": "A person is at a diner, ordering an omelette.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1.3 Performance on Larger Dataset", "sec_num": null }, { "text": "A person on a horse jumps over a broken down airplane.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contradiction", "sec_num": null }, { "text": "A person is outdoors, on a horse.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contradiction", "sec_num": null }, { "text": "They are smiling at their parents tiple genres such as letters promoting non-profit organizations, government reports and documents as well as fictional books. This expands the variation in language used in sentences, reducing the model's ability to memorize sentence pairs and their labels. Table 5 contains example pairs from the MNLI training data.", "cite_spans": [], "ref_spans": [ { "start": 292, "end": 299, "text": "Table 5", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Children smiling and waving at camera", "sec_num": null }, { "text": "Hypothesis Label Conceptually cream skimming has two basic dimensions -product and geography.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Premise", "sec_num": null }, { "text": "are what make cream skimming work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Product and geography", "sec_num": null }, { "text": "How do you know? All this is their information again.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neutral", "sec_num": null }, { "text": "This information belongs to them. The model's architecture is described in Figure 2 . The sentence embeddings are extracted from BERT (Devlin et al., 2018) and the NLI classification layers are trained separately. For experiments in \u00a74.2, the trained model is a 2-layer FNN, each hidden layer with a size of 4096. The activation function is a Leaky RELU. Similar to the experiments in appendix A.1.1, we perform 3-fold crossvalidation where the training dataset is resampled and the results presented in \u00a74.2 are an average over these 3 runs.", "cite_spans": [ { "start": 134, "end": 155, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 75, "end": 83, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Neutral", "sec_num": null }, { "text": "The models are trained with a learning rate of 0.0001 and the optimizer is RMSProp. In order to train the model with LPL and MSE, \u03b4 is configured. In comparison to a continuous \u03b4 variable used in the STS task, in NLI, the \u03b4 is constant for each label. For each dataset, the \u03b4 parameter is set after manual testing on the dev set. 
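For reproducibility, the SNLI/MNLI configuration described above can be summarized in a short sketch: the hidden size (4096), Leaky ReLU activation, RMSProp optimizer, and learning rate come from this section, the per-label δ values from §3.5, and the remaining details (a single linear projection layer, 768-dimensional frozen BERT embeddings) are assumptions of the sketch.

```python
import torch
import torch.nn as nn

DIM, HIDDEN, N_CLASSES = 768, 4096, 3     # frozen BERT embeddings -> 2 x 4096 FFN -> 3 labels
DELTA = {"entailment": 100.0, "contradiction": 1.0, "neutral": -5.0}   # per-label alignment weight

classifier = nn.Sequential(
    nn.Linear(3 * DIM, HIDDEN), nn.LeakyReLU(),   # input: [premise; hypothesis; projected premise]
    nn.Linear(HIDDEN, HIDDEN), nn.LeakyReLU(),
    nn.Linear(HIDDEN, N_CLASSES),
)
aligner = nn.Linear(DIM, DIM)                     # single-layer projection of the premise

optimizer = torch.optim.RMSprop(
    list(classifier.parameters()) + list(aligner.parameters()), lr=1e-4
)
```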
Sentence pairs that have a neutral label tend to be sentences Figure 13 : Accuracy of alignment regularization on MNLI dataset with a varying number of matched in-genre samples, from 5% to 100% of the training dataset only (total: 300K samples).", "cite_spans": [], "ref_spans": [ { "start": 392, "end": 401, "text": "Figure 13", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "A.2.1 Hyperparameters", "sec_num": null }, { "text": "that are dissimilar. The semantic difference can be based on differing subjects, predicates or objects in the sentence, the content / topic or even genre of the sentence. Hence while training the model, these projections are separated with the alignment loss (and have a negative \u03b4) rather than converged.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.2.1 Hyperparameters", "sec_num": null }, { "text": "From our initial experiments, we found that lower \u03b4 on entailment and contradiction yielded no change in accuracy from baseline. The accuracy increases when a higher positive \u03b4 scalar multiple is attached to entailment & contradiction, and a negative scalar is multiplied to the loss generated for neutral label samples. Figure 14 : Accuracy of alignment regularization on MNLI dataset with a varying number of mismatched out-of-genre samples, from 5% to 100% of the training dataset only (total: 300K samples).", "cite_spans": [], "ref_spans": [ { "start": 321, "end": 330, "text": "Figure 14", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "A.2.1 Hyperparameters", "sec_num": null }, { "text": "Apart from better accuracy when the training dataset is small, in Figures 12,13 , and 14, we observe the accuracy of models trained with the alignment loss using MSE alone and another in combination with LPL converge as the number of training samples increases. This happens because of the way k-nearest neighbor (k-NN) is computed for each embedding in the source domain. We use BERT to generate the embedding of each sentence in the SNLI and MNLI dataset. Because BERT itself is trained on millions of sentences from Wikipedia and Book Corpus, searching for k-NN embeddings for each sentence from this dataset (for each sentence in the training sample) is computationally difficult. In order to make the k-NN search tractable, neighbors are extracted from the dataset itself (500K sentences in SNLI and 300K sentences in MNLI). This impacts the overall improvement in accuracy using LPL as it is not a perfect reconstruction of the datapoint (using its neighbors). Initially when the dataset is small the neighbors are unique.", "cite_spans": [], "ref_spans": [ { "start": 66, "end": 79, "text": "Figures 12,13", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "A.2.2 Performance on Larger Dataset", "sec_num": null }, { "text": "As the dataset size increases, the unique neighbors reduce and are subsumed by the overall supervised dataset (hence MSE begins to perform better). The impact of LPL reduces as the number of unique neighbors decreases and the entire dataset is used to train the model. This is unlikely to happen when NNs from a larger unrelated text corpus reconstruct local manifolds.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.2.2 Performance on Larger Dataset", "sec_num": null }, { "text": "For experiments with cross lingual word alignment, we use the parallel corpus from Dinu et al. (2014) . Baseline Baseline + MSE Baseline + MSE + LPP Figure 15 : Standard deviation in performance. 
The graphs show the standard deviation in performance over 3 runs when the SNLI training dataset size varies. In each case, the model trained with LPL + MSE + task loss has the least variation in performance, while the model trained with task loss + MSE or with the task loss alone has a higher variance in performance.", "cite_spans": [ { "start": 83, "end": 101, "text": "Dinu et al. (2014)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 149, "end": 158, "text": "Figure 15", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "A.3 Crosslingual Word Alignment", "sec_num": null }, { "text": "(2014) and extends results in Table 1a. The dataset contains word pairs collected from the Europarl corpus. The bilingual dictionary has 5000 word pairs for training and 1500 word pairs for testing and evaluation. The language pairs in the dictionary include English \u2194 Italian, English \u2194 German, English \u2194 Finnish and English \u2194 Spanish. Figure 18 shows the pipeline to perform cross-lingual word alignment.", "cite_spans": [], "ref_spans": [ { "start": 30, "end": 38, "text": "Table 1a", "ref_id": "TABREF3" }, { "start": 338, "end": 347, "text": "Figure 18", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Number of Samples Standard Deviation (Training", "sec_num": null }, { "text": "Method / EN-IT accuracy: Mikolov et al. (2013) 34.93; Xing et al. (2015) 36.87; Faruqui and Dyer (2014) 37.80; Artetxe et al. (2016) 39.27; Locality Preserving Loss (Our Work) 43.33.", "cite_spans": [ { "start": 12, "end": 36, "text": "IT Mikolov et al. (2013)", "ref_id": "BIBREF19" }, { "start": 43, "end": 61, "text": "Xing et al. (2015)", "ref_id": "BIBREF37" }, { "start": 68, "end": 91, "text": "Faruqui and Dyer (2014)", "ref_id": "BIBREF14" }, { "start": 98, "end": 119, "text": "Artetxe et al. (2016)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Number of Samples Standard Deviation (Training", "sec_num": null }, { "text": "Once the initial pretrained word vectors are selected, preprocessing can be applied. Preprocessing functions include unit normalization, whitening, and z-normalization (Table 1b). It has been observed that deep neural networks perform word alignment poorly in comparison to a linear transformation (Y = W X). The linear transformation matrix W is learned by minimizing the sum of squared errors between Y and W X. Optimal parameters for W are obtained by minimizing this loss in closed form with singular value decomposition (SVD) (Artetxe et al., 2016) .", "cite_spans": [ { "start": 509, "end": 531, "text": "(Artetxe et al., 2016)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 173, "end": 184, "text": "( Table 1b)", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Number of Samples Standard Deviation (Training", "sec_num": null }, { "text": "Locality Preserving Alignment combines these preprocessing functions with training W . The parameters of W are learned using SGD, showing that LPL can be added to both linear and non-linear transformations. To find the translated word in the target embedding space, multiple inference mechanisms are available, such as nearest neighbor (NN), inverted softmax (Smith et al., 2017) , and cross-domain similarity local scaling (CSLS) (Conneau et al., 2017) . 
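As a rough sketch of these two stages, the snippet below learns W in closed form with SVD (the orthogonal Procrustes solution, in the spirit of Artetxe et al. (2016)) and retrieves translations with CSLS. For simplicity the hubness penalty on target words is computed over the query set rather than the full source vocabulary, and the k = 10 neighborhood size is an assumption, not the paper's exact configuration.

```python
import numpy as np

def learn_mapping_svd(X, Y):
    """Orthogonal W minimizing ||X W - Y||^2 for row-aligned seed pairs X, Y."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def csls_retrieve(src_emb, W, tgt_emb, k=10):
    """For each mapped source vector, return the index of the best target word under CSLS."""
    def unit(M):
        return M / np.linalg.norm(M, axis=1, keepdims=True)

    S = unit(src_emb @ W)              # mapped, length-normalized source vectors
    T = unit(tgt_emb)                  # length-normalized target vectors
    sims = S @ T.T                     # cosine similarities, shape (n_src, n_tgt)

    # Mean similarity of each mapped source to its k nearest target words ...
    r_src = np.sort(sims, axis=1)[:, -k:].mean(axis=1, keepdims=True)
    # ... and of each target word to its k nearest mapped sources (hubness penalty).
    r_tgt = np.sort(sims, axis=0)[-k:, :].mean(axis=0, keepdims=True)

    csls = 2 * sims - r_src - r_tgt
    return csls.argmax(axis=1)
```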
We use CSLS to find the translated word in the target language.", "cite_spans": [ { "start": 355, "end": 375, "text": "(Smith et al., 2017)", "ref_id": "BIBREF29" }, { "start": 420, "end": 442, "text": "(Conneau et al., 2017)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Number of Samples Standard Deviation (Training", "sec_num": null }, { "text": "In our experiments, we use the original pretrained word embeddings provided with the dataset (Dinu et al., 2014) . The models are trained with a learning rate of 0.001. The \u03b2 parameter is the learning rate specific to LPL (from equation 4); it is set to 0.7 and is tuned manually against the validation dataset. Table 8 shows the neighbors of the word \"windows\" in the source embedding (English) and the target embedding (Italian). Compared to previous methods that learn an explicit mapping of points between the two spaces, LPL tries to maintain the relations between words and their neighbors in the source domain while projecting them into the target domain.", "cite_spans": [ { "start": 89, "end": 108, "text": "(Dinu et al., 2014)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 308, "end": 315, "text": "Table 8", "ref_id": "TABREF17" } ], "eq_spans": [], "section": "Number of Samples Standard Deviation (Training", "sec_num": null }, { "text": "In this example, the word \"nt/2000\" is not part of the available supervised pairs and, without a locality preserving loss, would not have an explicit projection in the target domain to optimize. Figures 16 and 17 chart the variation in evaluated performance of each model on the STS-B and MNLI tasks when the size of the training dataset varies. In each case, a model optimized with the task-specific loss alone has higher variation in performance than models optimized with the additional MSE and LPL losses. The variation for baseline models is higher when the size of the dataset is small. Figure 15 shows the variation in test accuracy across runs on the SNLI dataset. The variation in accuracy is highest for the baseline model when the size of the dataset is small. For example, when the baseline is trained with 1000 samples (0.2% of the dataset), the variation in accuracy is 8.73%. Similarly, when the baseline is trained with 50 samples (about 1% of the dataset) in the STS-B task, the variation in accuracy is 6.02% (11.63% when the sample size is 250). The variation in accuracy reduces as the sample size increases.", "cite_spans": [], "ref_spans": [ { "start": 198, "end": 207, "text": "Figure 16", "ref_id": "FIGREF0" }, { "start": 591, "end": 600, "text": "Figure 15", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Number of Samples Standard Deviation (Training", "sec_num": null }, { "text": "This occurs because, when a subset of the data is randomly sampled, the quality of the sampled instances has a large impact on the final performance of the model. 
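A minimal sketch of the repeated-subsampling evaluation used to quantify this variation (the train-and-evaluate routine is passed in by the caller, since the actual training code is not reproduced here):

```python
import numpy as np

def consistency_report(dataset, train_and_evaluate, sample_sizes, n_runs=3, seed=13):
    """Report mean and standard deviation of a metric over several resampled subsets."""
    rng = np.random.default_rng(seed)
    for n in sample_sizes:
        scores = []
        for _ in range(n_runs):
            idx = rng.choice(len(dataset), size=n, replace=False)
            scores.append(train_and_evaluate([dataset[i] for i in idx]))
        print(f"n={n}: mean={np.mean(scores):.2f}, std={np.std(scores):.2f}")
```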
Thus, measuring the consistency of the model's performance over multiple runs is a vital evaluation criterion (as much as the accuracy itself).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Measuring Performance Consistency", "sec_num": null }, { "text": "Hence, training the model with an alignment loss, i.e., with locality preservation (L_{mse} and L_{lpl}), empirically guarantees that the model reaches near-optimal performance when the size of the supervised set is limited and that its performance has a narrower bound than that of the baseline model trained without them. Figure 18: Crosslingual Word Alignment Pipeline (CLA). As described in , the CLA pipeline includes choosing an existing set of pretrained vectors that can be preprocessed. Table 1b contains a list of preprocessing functions that can be applied. Artetxe et al. (2018) evaluate each preprocessing method in detail. Alignment involves learning the transform matrix W and inference involves finding the translated word. Figure 19: Locally Linear Embedding (LLE). The red point is a vector X_i that is reconstructed by a plane of its neighbors in blue. This linear plane is functionally represented by W \u2208 R^{n\u00d7k} for n points, each having k neighbors.", "cite_spans": [ { "start": 591, "end": 612, "text": "Artetxe et al. (2018)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 518, "end": 526, "text": "Table 1b", "ref_id": "TABREF3" }, { "start": 762, "end": 771, "text": "Figure 19", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "B Measuring Performance Consistency", "sec_num": null }, { "text": "While performing these experiments, not only the vocabularies but also the parameters are randomly initialized, making the model less dependent on how the training pairs are sampled from the dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Measuring Performance Consistency", "sec_num": null }, { "text": "Our work is inspired by locally linear embedding (LLE) (Roweis and Saul, 2000) . As discussed in \u00a72.1, in LLE the datapoints are assumed to have a linear relation with their neighbors (Figure 19 ). The reduced-dimension projection of each vector is learned through a two-step process: (a) learn the linear relationship through a reconstruction loss; (b) use that relationship to learn a low-dimensional representation. Assume each point in the manifold has k neighbors N_i. The reconstruction loss is:", "cite_spans": [ { "start": 57, "end": 80, "text": "(Roweis and Saul, 2000)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 187, "end": 197, "text": "(Figure 19", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "C Locally Linear Embedding", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L_{reconstruct} = \\sum_i \\left\\| X_i - \\sum_{j \\in N_i} W_{ij} X_j \\right\\|^2 ,", "eq_num": "(8)" } ], "section": "C Locally Linear Embedding", "sec_num": null }, { "text": "where X_i is the datapoint and X_j represents each of its neighbors. An additional constraint is imposed on the weights (\\sum_j W_{ij} = 1) to make the transform scale invariant. In (8) the weights W form an N \u00d7 k matrix for a dataset of N points, each with k neighbors (i.e., each point has its own weights). 
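To make step (a) concrete, here is a small NumPy sketch that solves for one row of W under the sum-to-one constraint, following the standard LLE recipe (Roweis and Saul, 2000); the small regularization term is a common practical safeguard rather than something specified in this paper.

```python
import numpy as np

def reconstruction_weights(x_i, neighbors, reg=1e-3):
    """Weights w minimizing ||x_i - sum_j w_j * neighbors[j]||^2 subject to sum_j w_j = 1.

    x_i       : (d,) datapoint
    neighbors : (k, d) matrix of its k nearest neighbors
    """
    Z = neighbors - x_i                              # center the neighborhood on x_i
    C = Z @ Z.T                                      # local Gram matrix, shape (k, k)
    C += reg * np.trace(C) * np.eye(len(neighbors))  # regularize in case C is singular
    w = np.linalg.solve(C, np.ones(len(neighbors)))  # solve C w = 1
    return w / w.sum()                               # enforce the sum-to-one constraint
```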
Given a learned W from (8), we learn Y_i (a projection of X_i) by minimizing the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C Locally Linear Embedding", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\sum_i \\left\\| Y_i - \\sum_{j \\in N_i} W_{ij} Y_j \\right\\|^2 ,", "eq_num": "(9)" } ], "section": "C Locally Linear Embedding", "sec_num": null }, { "text": "Y_i typically has a reduced dimensionality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C Locally Linear Embedding", "sec_num": null }, { "text": "Low resource has two interpretations: one where the corpus used to generate unsupervised pretrained embeddings is small, and the other where the parallel corpus for alignment is minimal. We experiment with the latter here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "See Appendix C for an in-depth explanation of LLE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Appendix A.1 provides details about the dataset and FFN.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://opus.lingfil.uu.se/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank our anonymous reviewers from the Adapt-NLP Workshop and previous NLP conferences for their constructive reviews. We thank Prof. Konstantinos Kalpakis for his insights about manifold alignment and locality preservation methods. The hardware used in our computational studies is part of the UMBC HPCF facility and UMBC's CARTA lab. This material is also based on research that is in part supported by the Air Force Research Laboratory (AFRL), DARPA, for the KAIROS program under agreement number FA8750-19-2-1003. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either express or implied, of the Air Force Research Laboratory (AFRL), DARPA, or the U.S. Government.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Learning principled bilingual mappings of word embeddings while preserving monolingual invariance", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2289--2294", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. 
In Proceedings of the 2016 Conference on Empiri- cal Methods in Natural Language Processing, pages 2289-2294.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. Generalizing and improving bilingual word embed- ding mappings with a multi-step framework of lin- ear transformations. In Proceedings of the Thirty- Second AAAI Conference on Artificial Intelligence (AAAI-18).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Laplacian eigenmaps and spectral techniques for embedding and clustering", "authors": [ { "first": "Mikhail", "middle": [], "last": "Belkin", "suffix": "" }, { "first": "Partha", "middle": [], "last": "Niyogi", "suffix": "" } ], "year": 2002, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "585--591", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikhail Belkin and Partha Niyogi. 2002. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in neural information processing systems, pages 585-591.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "One-sided unsupervised domain mapping", "authors": [ { "first": "Sagie", "middle": [], "last": "Benaim", "suffix": "" }, { "first": "Lior", "middle": [], "last": "Wolf", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "752--762", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sagie Benaim and Lior Wolf. 2017. One-sided unsu- pervised domain mapping. In Advances in Neural Information Processing Systems, pages 752-762.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Think globally, embed locally-locally linear meta-embedding of words", "authors": [ { "first": "Danushka", "middle": [], "last": "Bollegala", "suffix": "" }, { "first": "Kohei", "middle": [], "last": "Hayashi", "suffix": "" }, { "first": "Ken-Ichi", "middle": [], "last": "Kawarabayashi", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1709.06671" ] }, "num": null, "urls": [], "raw_text": "Danushka Bollegala, Kohei Hayashi, and Ken-ichi Kawarabayashi. 2017. Think globally, embed locally-locally linear meta-embedding of words. arXiv preprint arXiv:1709.06671.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Aligning mixed manifolds", "authors": [ { "first": "Thomas", "middle": [], "last": "Boucher", "suffix": "" }, { "first": "", "middle": [], "last": "Carey", "suffix": "" }, { "first": "Melinda", "middle": [ "Darby" ], "last": "Sridhar Mahadevan", "suffix": "" }, { "first": "", "middle": [], "last": "Dyar", "suffix": "" } ], "year": 2015, "venue": "AAAI", "volume": "", "issue": "", "pages": "2511--2517", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Boucher, CJ Carey, Sridhar Mahadevan, and Melinda Darby Dyar. 2015. Aligning mixed mani- folds. 
In AAAI, pages 2511-2517.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A large annotated corpus for learning natural language inference", "authors": [ { "first": "Gabor", "middle": [], "last": "Samuel R Bowman", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Potts", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.05326" ] }, "num": null, "urls": [], "raw_text": "Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015a. A large anno- tated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A large annotated corpus for learning natural language inference", "authors": [ { "first": "R", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "Gabor", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Potts", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015b. A large anno- tated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation", "authors": [ { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Inigo", "middle": [], "last": "Lopez-Gazpio", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1708.00055" ] }, "num": null, "urls": [], "raw_text": "Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez- Gazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Word translation without parallel data", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" }, { "first": "Ludovic", "middle": [], "last": "Denoyer", "suffix": "" }, { "first": "Herv\u00e9", "middle": [], "last": "J\u00e9gou", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1710.04087" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2017. Word translation without parallel data. 
arXiv preprint arXiv:1710.04087.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Multidimensional scaling", "authors": [ { "first": "F", "middle": [], "last": "Trevor", "suffix": "" }, { "first": "", "middle": [], "last": "Cox", "suffix": "" }, { "first": "A", "middle": [ "A" ], "last": "Michael", "suffix": "" }, { "first": "", "middle": [], "last": "Cox", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Trevor F Cox and Michael AA Cox. 2000. Multidimen- sional scaling. CRC press.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Generalized unsupervised manifold alignment", "authors": [ { "first": "Zhen", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Hong", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Shiguang", "middle": [], "last": "Shan", "suffix": "" }, { "first": "Xilin", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2014, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "2429--2437", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhen Cui, Hong Chang, Shiguang Shan, and Xilin Chen. 2014. Generalized unsupervised manifold alignment. In Advances in Neural Information Pro- cessing Systems, pages 2429-2437.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Improving zero-shot learning by mitigating the hubness problem", "authors": [ { "first": "Georgiana", "middle": [], "last": "Dinu", "suffix": "" }, { "first": "Angeliki", "middle": [], "last": "Lazaridou", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6568" ] }, "num": null, "urls": [], "raw_text": "Georgiana Dinu, Angeliki Lazaridou, and Marco Ba- roni. 2014. Improving zero-shot learning by mit- igating the hubness problem. arXiv preprint arXiv:1412.6568.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Improving vector space word representations using multilingual correlation", "authors": [ { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "462--471", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manaal Faruqui and Chris Dyer. 2014. Improving vec- tor space word representations using multilingual correlation. 
In Proceedings of the 14th Conference of the European Chapter of the Association for Com- putational Linguistics, pages 462-471.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "How to (properly) evaluate crosslingual word embeddings", "authors": [ { "first": "Goran", "middle": [], "last": "Glavas", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Litschko", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vulic", "suffix": "" } ], "year": 2019, "venue": "On strong baselines, comparative analyses, and some misconceptions", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1902.00508" ] }, "num": null, "urls": [], "raw_text": "Goran Glavas, Robert Litschko, Sebastian Ruder, and Ivan Vulic. 2019. How to (properly) evaluate cross- lingual word embeddings: On strong baselines, com- parative analyses, and some misconceptions. arXiv preprint arXiv:1902.00508.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Locality preserving projections", "authors": [ { "first": "Xiaofei", "middle": [], "last": "He", "suffix": "" }, { "first": "Partha", "middle": [], "last": "Niyogi", "suffix": "" } ], "year": 2004, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "153--160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaofei He and Partha Niyogi. 2004. Locality preserv- ing projections. In Advances in neural information processing systems, pages 153-160.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Albert: A lite bert for self-supervised learning of language representations", "authors": [ { "first": "Zhenzhong", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Mingda", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Piyush", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.11942" ] }, "num": null, "urls": [], "raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learn- ing of language representations. 
arXiv preprint arXiv:1909.11942.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Roberta: A robustly optimized bert pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Norma: Neighborhood sensitive maps for multilingual word embeddings", "authors": [ { "first": "Ndapandula", "middle": [], "last": "Nakashole", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "512--522", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ndapandula Nakashole. 2018. Norma: Neighborhood sensitive maps for multilingual word embeddings. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 512-522.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word rep- resentation. 
In Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Sentencebert: Sentence embeddings using siamese bertnetworks", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.10084" ] }, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- bert: Sentence embeddings using siamese bert- networks. arXiv preprint arXiv:1908.10084.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "2020. A primer in bertology: What we know about how bert works", "authors": [ { "first": "Anna", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Kovaleva", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rumshisky", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.12327" ] }, "num": null, "urls": [], "raw_text": "Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in bertology: What we know about how bert works. arXiv preprint arXiv:2002.12327.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Nonlinear dimensionality reduction by locally linear embedding. science", "authors": [ { "first": "T", "middle": [], "last": "Sam", "suffix": "" }, { "first": "Lawrence K", "middle": [], "last": "Roweis", "suffix": "" }, { "first": "", "middle": [], "last": "Saul", "suffix": "" } ], "year": 2000, "venue": "", "volume": "290", "issue": "", "pages": "2323--2326", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sam T Roweis and Lawrence K Saul. 2000. Nonlin- ear dimensionality reduction by locally linear em- bedding. science, 290(5500):2323-2326.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "A survey of cross-lingual embedding models", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vulic", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Ruder, Ivan Vulic, and Anders S\u00f8gaard. 2017. A survey of cross-lingual embedding models. CoRR, abs/1706.04902.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A survey of cross-lingual word embedding models", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2019, "venue": "Journal of Artificial Intelligence Research", "volume": "65", "issue": "", "pages": "569--631", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Ruder, Ivan Vuli\u0107, and Anders S\u00f8gaard. 2019. A survey of cross-lingual word embedding models. 
Journal of Artificial Intelligence Research, 65:569-631.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Learning internal representations by error propagation", "authors": [ { "first": "Geoffrey", "middle": [ "E" ], "last": "David E Rumelhart", "suffix": "" }, { "first": "Ronald J", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "", "middle": [], "last": "Williams", "suffix": "" } ], "year": 1985, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. 1985. Learning internal representations by error propagation. Technical report, DTIC Docu- ment.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Ridge regression, hubness, and zero-shot learning", "authors": [ { "first": "Yutaro", "middle": [], "last": "Shigeto", "suffix": "" }, { "first": "Ikumi", "middle": [], "last": "Suzuki", "suffix": "" }, { "first": "Kazuo", "middle": [], "last": "Hara", "suffix": "" }, { "first": "Masashi", "middle": [], "last": "Shimbo", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2015, "venue": "Joint European Conference on Machine Learning and Knowledge Discovery in Databases", "volume": "", "issue": "", "pages": "135--151", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yutaro Shigeto, Ikumi Suzuki, Kazuo Hara, Masashi Shimbo, and Yuji Matsumoto. 2015. Ridge regres- sion, hubness, and zero-shot learning. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 135-151. Springer.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Offline bilingual word vectors, orthogonal transformations and the inverted softmax", "authors": [ { "first": "L", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" }, { "first": "H", "middle": [ "P" ], "last": "David", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Turban", "suffix": "" }, { "first": "Nils", "middle": [ "Y" ], "last": "Hamblin", "suffix": "" }, { "first": "", "middle": [], "last": "Hammerla", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1702.03859" ] }, "num": null, "urls": [], "raw_text": "Samuel L Smith, David HP Turban, Steven Hamblin, and Nils Y Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. arXiv preprint arXiv:1702.03859.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "On the limitations of unsupervised bilingual dictionary induction", "authors": [ { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1805.03620" ] }, "num": null, "urls": [], "raw_text": "Anders S\u00f8gaard, Sebastian Ruder, and Ivan Vuli\u0107. 2018. On the limitations of unsupervised bilingual dictionary induction. 
arXiv preprint arXiv:1805.03620.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Extracting and composing robust features with denoising autoencoders", "authors": [ { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Larochelle", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Pierre-Antoine", "middle": [], "last": "Manzagol", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 25th international conference on Machine learning", "volume": "", "issue": "", "pages": "1096--1103", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoen- coders. In Proceedings of the 25th international conference on Machine learning, pages 1096-1103. ACM.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel R", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1804.07461" ] }, "num": null, "urls": [], "raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Generalized autoencoder: A neural network framework for dimensionality reduction", "authors": [ { "first": "Wei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Yizhou", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition workshops", "volume": "", "issue": "", "pages": "490--497", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Wang, Yan Huang, Yizhou Wang, and Liang Wang. 2014. Generalized autoencoder: A neural network framework for dimensionality reduction. In Pro- ceedings of the IEEE conference on computer vision and pattern recognition workshops, pages 490-497.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "authors": [ { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1112--1122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. 
In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122. Association for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "authors": [ { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Samuel R", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.05426" ] }, "num": null, "urls": [], "raw_text": "Adina Williams, Nikita Nangia, and Samuel R Bow- man. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Normalized word embedding and orthogonal transform for bilingual word translation", "authors": [ { "first": "Chao", "middle": [], "last": "Xing", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yiye", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1006--1011", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal trans- form for bilingual word translation. In Proceedings of the 2015 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 1006-1011.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "authors": [ { "first": "Jun-Yan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Taesung", "middle": [], "last": "Park", "suffix": "" }, { "first": "Phillip", "middle": [], "last": "Isola", "suffix": "" }, { "first": "Alexei", "middle": [ "A" ], "last": "Efros", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the IEEE international conference on computer vision", "volume": "", "issue": "", "pages": "2223--2232", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Pro- ceedings of the IEEE international conference on computer vision, pages 2223-2232.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Quality of alignment with different types of losses. A, B are two words in two word embedding manifolds M1", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "t 1 .. w t m } are sets of units in each vocabulary of size n and m. The distributed representations of the units in each manifold are M s = {m s 1 ... m s n } and M t = {m t 1 .", "type_str": "figure", "num": null, "uris": null }, "FIGREF2": { "text": "Use of alignment loss for the NLI and STS-B", "type_str": "figure", "num": null, "uris": null }, "FIGREF3": { "text": "Pearson correlation of alignment regularization on STS-B up to 50% of the training dataset only (total: 4700 samples). 
The models are trained with MSE and MSE + LPL.", "type_str": "figure", "num": null, "uris": null }, "FIGREF4": { "text": "Accuracy of alignment regularization on STS-B up to 5% of the training dataset only. The models are trained with MSE and MSE + LPL.", "type_str": "figure", "num": null, "uris": null }, "FIGREF5": { "text": "fier for a sentence pair representation. P and H are the sample premise and hypothesis pair. The original label is Entailment. (nP, nH) are the nearest neighbors of this sentence pair's representation from the penultimate layer of each classifier, i.e., baseline (B), baseline + MSE (BM), and baseline + MSE + LPL (BML). 1 & 2 are nearest neighbors from the baseline, 3 & 4 are when trained with MSE only and 5 & 6 are when trained with MSE and LPL.", "type_str": "figure", "num": null, "uris": null }, "FIGREF7": { "text": "Standard deviation in pearson correlation. The graphs show the standard deviation in correlation over 3 runs", "type_str": "figure", "num": null, "uris": null }, "FIGREF8": { "text": "Standard deviation in accuracy of alignment on MNLI. The graph shows standard deviation in accuracy when size of the training sample set differs (total: 300K) for the baseline, baseline + MSE and baseline + MSE + LPL models: LPL yields more consistently optimal systems. (a) Standard deviation in accuracy when tested with in in-genre sentence pairs (matched MNLI) (b) Standard deviation in accuracy when tested with in out-of-genre sentence pairs (mismatched MNLI)", "type_str": "figure", "num": null, "uris": null }, "TABREF0": { "num": null, "html": null, "text": "). The baseline model is the same as Siamese-BERT", "content": "
Figure 3: Pearson correlation on semantic text similarity with the STS-B dataset (x-axis: % sampled from training data; y-axis: Pearson correlation; series: Baseline, Baseline + MSE + LPL). Locality Preserving Loss (LPL) improves sentence pair similarity at all data sizes, from small (100 samples, or 2% of the data) to 50% of the dataset.
", "type_str": "table" }, "TABREF1": { "num": null, "html": null, "text": "Accuracy of alignment regularization on SNLI.The graph shows the accuracy, averaged across 3 runs, for differing size of training samples up to 5% of the training dataset only (total: 500K). Accuracy of alignment regularization on MNLI dataset with a varying number of matched in-genre samples, up to 5% of the training dataset only (total: 300K samples).", "content": "
Figure 4: Model performance on SNLI (different losses); x-axis: % sampled from training data; y-axis: accuracy; series: Baseline, Baseline + MSE + LPL.
Figure 5: Model performance on MNLI matched (different losses); x-axis: % sampled from training data; y-axis: accuracy; series: Baseline, Baseline + MSE + LPL.
", "type_str": "table" }, "TABREF2": { "num": null, "html": null, "text": "). SNLI consists of 500K sentence pairs while MNLI contains about 433k pairs. The MNLI dataset contains two test datasets. The matched dataset contains sentences that are sampled from the same genres as the training samples while mismatched samples test the models accuracy for out of genre text.Figures 4, 5, and 6 show the accuracy of the models when optimized with a standard cross-entropy loss (baseline) and with MSE and LPL combined. The accuracy is measured when the size of the We compare our method (bottom row: LPL) on cross-lingual word alignment. In comparison to Artetxe et al.(2018), we use cross-domain similarity local scaling (CSLS)(Conneau et al., 2017) to retrieve the translated word. The method column lists different losses/methods used to learn the projection: NN is nearest neighbor search while IS is inverted softmax. Many mapping methods use additional transformation steps.", "content": "
(a) Method | Trans. | Optim. | EN-IT | EN-DE | EN-FI | EN-ES
MSE, Train T\u2192S (Shigeto et al., 2015) | S1, S2 | Linear | 41.53 | 43.07 | 31.04 | 33.73
MSE (Artetxe et al., 2016) | S0, S2 | Linear | 39.27 | 41.87 | 30.62 | 31.40
MSE: IS (Smith et al., 2017) | S0, S2, S5 | Linear | 41.53 | 43.07 | 31.04 | 33.73
MSE: NN (Artetxe et al., 2018) | S0-S5 | Linear | 44.00 | 44.27 | 32.94 | 36.53
MSE: IS (Artetxe et al., 2018) | S0-S5 | Linear | 45.27 | 44.13 | 32.94 | 36.60
MSE | S0, S2 | SGD | 39.67 | 45.47 | 29.42 | 35.3
LPL+MSE: CSLS | S0, S2 | SGD | 43.33 | 46.07 | 33.50 | 35.13
(b) Trans. | Desc. | Backprop?
S0 | Embedding normalization (unit / center) | Yes
S1 | Whitening | No
S2 | Orthogonal Mapping | Yes
S3 | Re-weighting | No
S4 | De-Whitening | No
S5 | Dimensionality Reduction | No
", "type_str": "table" }, "TABREF3": { "num": null, "html": null, "text": "The accuracy of the locality preserving method.Table 1alists 6 high-performing supervised/semi-supervised baselines; table 1b lists the transformations used in these methods and how easily those transformations can be used with back-propagation.", "content": "
Accuracy vs. % sampled from training data (1-5% of the training set); series: Baseline, Baseline + MSE + LPL.
", "type_str": "table" }, "TABREF4": { "num": null, "html": null, "text": "while having fewer preprocess-", "content": "
Model performance on STS-B (different losses); x-axis: % sampled from training data; y-axis: Pearson correlation; series: Baseline + MSE, Baseline + MSE + LPL.
", "type_str": "table" }, "TABREF5": { "num": null, "html": null, "text": "Accuracy of alignment regularization on SNLI up to 5% of the training dataset only. The models are trained with MSE and MSE + LPL.", "content": "
Figure 8: Model performance on SNLI (different losses) and on MNLI matched (different losses); x-axis: % sampled from training data; y-axis: accuracy; series: Baseline + MSE, Baseline + MSE + LPL.
", "type_str": "table" }, "TABREF6": { "num": null, "html": null, "text": "5% of the training dataset only. The models are trained with MSE and MSE + LPL.", "content": "
shows the 2 nearest neighbors for a premise-hypothesis pair (P, H) taken from each classifier, i.e., baseline (B), baseline + MSE (BM), and baseline + MSE + LPL (BML), after they are trained (the dataset size is small, at just 2000 samples). Since NLI is a reasoning task, the sentence pair representations should ideally cluster around a pattern that represents Entailment, Contradiction, or Neutral. Instead, we observe that when the samples are limited, the baseline model's sentence pair representations have NNs that are merely syntactically similar (NNs 1 and 2). The predicted labels for these NN pairs are not clustered into entailment but are a combination of all 3 labels. This problem is reduced for models trained with BM and BML (NNs 3 and 4 for BM, NNs 5 and 6 for BML): the predicted labels of the NNs are clustered into entailment only. Sentence pair representation clusters containing a single label suggest that the models are better at extracting a pattern for entailment (and at improving the model's ability to reason). This semantic clustering of representations can be attributed to
", "type_str": "table" }, "TABREF7": { "num": null, "html": null, "text": "", "content": "", "type_str": "table" }, "TABREF9": { "num": null, "html": null, "text": "Samples from theSNLI (Bowman et al., 2015b) dataset. Each pair consists of two sentences and a label with one of three values entailment, neutral, contradiction.", "content": "
", "type_str": "table" }, "TABREF10": { "num": null, "html": null, "text": "Samples from the SNLI (Bowman et al., 2015b) dataset. Each pair consists of two sentences and label with one of three values entailment, neutral, contradiction", "content": "
", "type_str": "table" }, "TABREF12": { "num": null, "html": null, "text": "Samples from theSNLI (Bowman et al., 2015b) dataset. Each pair consists of two sentences and label with one of three values entailment, neutral, contradiction", "content": "
Model performance on MNLI mismatched (different losses); x-axis: % sampled from training data (up to 100%); y-axis: accuracy; series: Baseline, Baseline + MSE + LPL.
", "type_str": "table" }, "TABREF13": { "num": null, "html": null, "text": "", "content": "
compares our method with other methods such as Xing et al. (2015) and Faruqui and Dyer
", "type_str": "table" }, "TABREF15": { "num": null, "html": null, "text": "Accuracy of various models predicting word translated from English to Italian.", "content": "", "type_str": "table" }, "TABREF17": { "num": null, "html": null, "text": "Neighbors of the word \"windows\" in source domain (English), target domain (Italian) and the combined vector space with both English & Italian vocabulary. The Aligned neighborhood contains a mix of the English and Italian words, not just the translation.", "content": "
", "type_str": "table" } } } }