{ "paper_id": "I17-1008", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:38:17.521558Z" }, "title": "Word Ordering as Unsupervised Learning Towards Syntactically Plausible Word Representations", "authors": [ { "first": "Noriki", "middle": [], "last": "Nishida", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Tokyo", "location": {} }, "email": "nishida@nlab.ci.i.u-tokyo.ac.jp" }, { "first": "Hideki", "middle": [], "last": "Nakayama", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Tokyo", "location": {} }, "email": "nakayama@nlab.ci.i.u-tokyo.ac.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The research question we explore in this study is how to obtain syntactically plausible word representations without using human annotations. Our underlying hypothesis is that word ordering tests, or linearizations, is suitable for learning syntactic knowledge about words. To verify this hypothesis, we develop a differentiable model called Word Ordering Network (WON) that explicitly learns to recover correct word order while implicitly acquiring word embeddings representing syntactic knowledge. We evaluate the word embeddings produced by the proposed method on downstream syntax-related tasks such as partof-speech tagging and dependency parsing. The experimental results demonstrate that the WON consistently outperforms both order-insensitive and order-sensitive baselines on these tasks.", "pdf_parse": { "paper_id": "I17-1008", "_pdf_hash": "", "abstract": [ { "text": "The research question we explore in this study is how to obtain syntactically plausible word representations without using human annotations. Our underlying hypothesis is that word ordering tests, or linearizations, is suitable for learning syntactic knowledge about words. To verify this hypothesis, we develop a differentiable model called Word Ordering Network (WON) that explicitly learns to recover correct word order while implicitly acquiring word embeddings representing syntactic knowledge. We evaluate the word embeddings produced by the proposed method on downstream syntax-related tasks such as partof-speech tagging and dependency parsing. The experimental results demonstrate that the WON consistently outperforms both order-insensitive and order-sensitive baselines on these tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Distributed word representations have been successfully utilized to transfer lexical knowledge to downstream tasks in a semi-supervised manner, and well known to benefit various applications (Turian et al., 2010; Collobert et al., 2011; Socher et al., 2011) . As different applications generally require different features, it is crucial to choose representations suitable for target downstream tasks.", "cite_spans": [ { "start": 191, "end": 212, "text": "(Turian et al., 2010;", "ref_id": "BIBREF25" }, { "start": 213, "end": 236, "text": "Collobert et al., 2011;", "ref_id": "BIBREF4" }, { "start": 237, "end": 257, "text": "Socher et al., 2011)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The research question we want to explore in this study is how to obtain syntactically plausible word representations without human annotations, with a focus on syntax-related tasks (parsing, etc.) . 
Whereas a variety of approaches related to semantic word embeddings have been pro- Figure 1 : Illustration of the word ordering task. The goal of the word ordering task is to recover an original order given a set of shuffled tokens. The figure shows an example where original sentence is \"this is a short sentence.\" To correctly reorder the tokens, syntactic knowledge about words (e.g. grammatical classes of words and their possible relations) is indispensable. In this study, we explore how well the word ordering task can be an objective to obtain syntactic word representations.", "cite_spans": [ { "start": 181, "end": 196, "text": "(parsing, etc.)", "ref_id": null } ], "ref_spans": [ { "start": 282, "end": 290, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "posed (Mikolov et al., 2013a,b; Pennington et al., 2014) , it still remains unclear how we should obtain syntactic word embeddings from unannotated corpora.", "cite_spans": [ { "start": 6, "end": 31, "text": "(Mikolov et al., 2013a,b;", "ref_id": null }, { "start": 32, "end": 56, "text": "Pennington et al., 2014)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Word ordering tests, or linearizations, are commonly used to evaluate students' language proficiency. Suppose that we are given a set of randomly shuffled tokens {\"a\", \"is,\" \"sentence,\" \"short,\" \"this,\" \".\"}. In this case we can easily recover the original order: \"this is a short sentence.\" We consider this doable thanks to our knowledge about grammatical classes (e.g., partof-speech (POS) tags) of words and their possible relations. We depict the above explanation in Figure 1. Of course, it might not be necessary for machines to mimic exactly the same reasoning pro-cess in humans. However, syntactic knowledge about words is crucial for both humans and machines to solve the word ordering task.", "cite_spans": [], "ref_spans": [ { "start": 473, "end": 479, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Inspired by this observation, in this study, we develop an end-to-end model called the Word Ordering Network (WON) that explicitly learns to recover correct word orders while implicitly acquiring word embeddings representing syntactic information. Our underlying hypothesis is that the word ordering task can be an objective for learning syntactic knowledge about words. The WON receives a set of shuffled tokens and first transforms them independently to low-dimensional continuous vectors, which are then aggregated to produce a single summarization vector. We formalize the word ordering task as a sequential prediction problem of a permutation matrix. 
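As a purely illustrative aside (not part of the released implementation), the sketch below builds one such training instance from the Figure 1 example: the sentence is shuffled to form the unordered input, and the gold reordering that restores the original order serves as supervision. All variable names here are ours.

```python
import random

# Purely illustrative: build one word ordering training instance from the
# Figure 1 example. Names here are ours, not from the released code.
original = ['this', 'is', 'a', 'short', 'sentence', '.']

random.seed(0)
shuffled = original[:]
random.shuffle(shuffled)          # the unordered input X the model receives

# Gold reordering: for each output position r, the index (into the shuffled
# input) of the token that belongs there. Duplicate tokens are consumed once.
gold_indices = []
used = set()
for token in original:
    for c, candidate in enumerate(shuffled):
        if c not in used and candidate == token:
            gold_indices.append(c)
            used.add(c)
            break

print(shuffled)                               # scrambled token set
print(gold_indices)                           # supervision for the model
print([shuffled[c] for c in gold_indices])    # recovers the original order
```
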
We use a recurrent neural network (RNN) (Elman, 1990) with long short-term memory (LSTM) units (Hochreiter and Schmidhuber, 1997) and a soft attention mechanism (Bahdanau et al., 2014; Luong et al., 2015) that constructs rows of permutation matrices sequentially conditioned on summarization vectors.", "cite_spans": [ { "start": 696, "end": 709, "text": "(Elman, 1990)", "ref_id": "BIBREF5" }, { "start": 751, "end": 785, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF10" }, { "start": 817, "end": 840, "text": "(Bahdanau et al., 2014;", "ref_id": "BIBREF1" }, { "start": 841, "end": 860, "text": "Luong et al., 2015)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We evaluate the proposed word embeddings on downstream syntax-related tasks such as POS tagging and dependency parsing. The experimental results demonstrate that the WON outperforms both order-insensitive and order-sensitive baselines, and successfully yields the highest performance. In addition, we also evaluate the WON on traditional word-level benchmarks, such as word analogy and word similarity tasks. Combined with semantics-oriented embeddings by a simple finetuning technique, the WON gives competitive or better performances than the other baselines. Interestingly, we find that the WON has a potential to refine and improve semantic features. Moreover, we qualitatively analyze the feature space produced by the WON and find that the WON tends to capture not only syntactic but also semantic regularities between words. The source code of this work is available online. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we formulate the WON which implicitly acquires syntactic word embeddings through learning to solve word ordering problems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposed Method", "sec_num": "2" }, { "text": "1 https://github.com/norikinishida/won", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposed Method", "sec_num": "2" }, { "text": "Given a set of shuffled tokens X = {w 1 , . . . , w N }, the WON first transforms every single symbol w c into a low-dimensional continuous vector, i.e.,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embedding Layer", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "e c = F (w c ) \u2208 R D ,", "eq_num": "(1)" } ], "section": "Embedding Layer", "sec_num": "2.1" }, { "text": "where F is a learnable function. Please note that the number of tokens N in the input X can vary in the word ordering task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embedding Layer", "sec_num": "2.1" }, { "text": "To perform reordering on a set of shuffled embeddings {e 1 , . . . , e N }, we aggregate the embeddings and compute a single summarization vector. 
The aggregation function is a sum of word embeddings followed by a non-linear transformation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Aggregation", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "e = tanh(W a N c=1 e c + b a ) \u2208 R D ,", "eq_num": "(2)" } ], "section": "Aggregation", "sec_num": "2.2" }, { "text": "where W a \u2208 R D\u00d7D and b a \u2208 R D are a projection matrix and bias vector, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Aggregation", "sec_num": "2.2" }, { "text": "We formalize a reordering problem as a prediction task of a permutation matrix. A permutation matrix is a square binary matrix and every row and column contains exactly one entry of 1 and 0s elsewhere. The leftmultiplication of a matrix E \u2208 R N \u00d7D by a permutation matrix P \u2208 R N \u00d7N rearranges the rows of the matrix E, e.g. \uf8eb", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prediction of a Permutation Matrix", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\uf8ec \uf8ec \uf8ed e 1 e 2 e 3 e 4 \uf8f6 \uf8f7 \uf8f7 \uf8f8 = P E (3) = \uf8eb \uf8ec \uf8ec \uf8ed 0 1 0 0 0 0 0 1 1 0 0 0 0 0 1 0 \uf8f6 \uf8f7 \uf8f7 \uf8f8 \uf8eb \uf8ec \uf8ec \uf8ed e 3 e 1 e 4 e 2 \uf8f6 \uf8f7 \uf8f7 \uf8f8 .", "eq_num": "(4)" } ], "section": "Prediction of a Permutation Matrix", "sec_num": "2.3" }, { "text": "Equation 4 gives an example where E = (e 3 , e 1 , e 4 , e 2 ) , and the original sentence (correct order) is w 1 , w 2 , w 3 , w 4 . In the word ordering task, one of the issues in predicting permutation matrices is that the number of tokens N changes according to the variable lengths of input sentences. Therefore, it is impossible to define and train learning models that have fixed-dimensional outputs such as multi-layer perceptrons. Figure 2 : Visualization of our approach to sequentially predict a permutation matrix P \u2208 R N \u00d7N . In this case, we are given N = 4 shuffled tokens (w 1 , w 2 , w 3 , w 4 ). We first independently embeds each symbol to dense vectors (e 1 , e 2 , e 3 , e 4 ). Then, by using an RNN and a soft attention mechanism, we sequentially constructs the rows of the permutation matrix P = (p 1 , p 2 , p 3 , p 4 ) for N steps through a scoring function. The vector h r \u2208 R D denotes the r-th hidden state of the RNN. One can interpret p r as a selective probability distribution over the input tokens. For simplicity, in this figure, we ignore the projection matrix in the scoring function (Eq. 8).", "cite_spans": [], "ref_spans": [ { "start": 440, "end": 448, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Prediction of a Permutation Matrix", "sec_num": "2.3" }, { "text": "Recently, Vinyals et al. (2015) proposed the Pointer Networks (PtrNets) that were successfully applied to geometric sorting problems. Inspired by the PtrNet, we develop an LSTM (Hochreiter and Schmidhuber, 1997) with a soft attention mechanism (Bahdanau et al., 2014; Luong et al., 2015) . The LSTM constructs rows of a permutation matrix P = (p 1 , . . . , p N ) conditioned on a set of word embeddings {e 1 , . . . , e m } calculated by Equation 1. If N c=1 p r,c = 1 holds, one can interpret p r,c as the probability of the token w c to be placed at r-th position. 
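To make Equations 2-4 concrete, the following NumPy sketch reproduces the reordering example of Eq. 4 and the summarization step of Eq. 2. The embeddings and the parameters W_a, b_a are random placeholders rather than trained weights, so the sketch only illustrates the shapes and operations involved.

```python
import numpy as np

# Illustrative only: random vectors stand in for learned embeddings, and
# W_a, b_a are random placeholders for the trained parameters of Eq. 2.
rng = np.random.default_rng(0)
D = 4
e = {i: rng.normal(size=D) for i in (1, 2, 3, 4)}

# Shuffled embedding matrix with rows (e_3, e_1, e_4, e_2), as in Eq. 4.
E = np.stack([e[3], e[1], e[4], e[2]])

# Permutation matrix from Eq. 4: row r has its single 1 in the column of
# the token that belongs at position r.
P = np.array([[0, 1, 0, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)

reordered = P @ E
assert np.allclose(reordered, np.stack([e[1], e[2], e[3], e[4]]))

# Summarization vector of Eq. 2; note the sum over embeddings is
# invariant to the input order of the shuffled tokens.
W_a = rng.normal(size=(D, D))
b_a = np.zeros(D)
summary = np.tanh(W_a @ E.sum(axis=0) + b_a)
print(summary.shape)    # (D,)
```
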
In Figure 2 , we show a visualization of our approach to predict a permutation matrix with the LSTM.", "cite_spans": [ { "start": 10, "end": 31, "text": "Vinyals et al. (2015)", "ref_id": "BIBREF26" }, { "start": 177, "end": 211, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF10" }, { "start": 244, "end": 267, "text": "(Bahdanau et al., 2014;", "ref_id": "BIBREF1" }, { "start": 268, "end": 287, "text": "Luong et al., 2015)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 571, "end": 579, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Prediction of a Permutation Matrix", "sec_num": "2.3" }, { "text": "The LSTM's r-th hidden state h r \u2208 R D and memory cells c r \u2208 R D are computed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prediction of a Permutation Matrix", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h r , c r = \u1ebd, 0 (r = 0) F LSTM (e i r\u22121 , h r\u22121 , c r\u22121 ) (1 \u2264 r \u2264 N ) ,", "eq_num": "(5)" } ], "section": "Prediction of a Permutation Matrix", "sec_num": "2.3" }, { "text": "where the function F LSTM is a state-update function and i r\u22121 \u2208 {1, . . . , N } denotes the index of the token w i r\u22121 that is placed at the previous posi-tion, i.e.,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prediction of a Permutation Matrix", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "i r\u22121 = argmax c\u2208{1,...,N } p r\u22121,c .", "eq_num": "(6)" } ], "section": "Prediction of a Permutation Matrix", "sec_num": "2.3" }, { "text": "Subsequently, we predict a selective distribution over the input tokens:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prediction of a Permutation Matrix", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p r,c = exp(score(h r , e c )) N k=1 exp(score(h r , e k )) ,", "eq_num": "(7)" } ], "section": "Prediction of a Permutation Matrix", "sec_num": "2.3" }, { "text": "where the scoring function score computes the confidence of placing the token w c at r-th position. We define the scoring function as a bilinear model as follows", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prediction of a Permutation Matrix", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "score(u, v) = u W s v \u2208 R.", "eq_num": "(8)" } ], "section": "Prediction of a Permutation Matrix", "sec_num": "2.3" }, { "text": "where W s \u2208 R D\u00d7D denotes a learnable matrix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prediction of a Permutation Matrix", "sec_num": "2.3" }, { "text": "As the WON is designed to be fully differentiable, it can be trained with any gradient descent algorithms, such as RMSProp (Tieleman and Hinton, 2012) . Given a set of shuffled tokens X = {w 1 , . . . 
, w N }, we define a loss function as the following negative log likelihood:", "cite_spans": [ { "start": 123, "end": 150, "text": "(Tieleman and Hinton, 2012)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Objective Function", "sec_num": "2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L(X ) = N r=1 \u2212 log p r,tr", "eq_num": "(9)" } ], "section": "Objective Function", "sec_num": "2.4" }, { "text": "where t r \u2208 {1, . . . , N } denotes the index of the ground-truth token that appears at r-th position in the original sentence. In other words, an ordered sequence w t 1 , w t 2 , . . . , w t N forms the original sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Objective Function", "sec_num": "2.4" }, { "text": "Among the most popular methods for learning word embeddings are the skip-gram (SG) model and the continuous bag-of-words (CBOW) of Mikolov et al. (2013a,b) , or the GloVe introduced by Pennington et al. (2014). These are formalized as simple log-bilinear models based on the inner product between two word vectors. The core idea is based on the distributional hypothesis (Harris, 1954; Firth, 1957) , stating that words appearing in similar contexts tend to have similar meanings. For example, SG and CBOW are trained by making predictions of bag-of-words contexts appearing in a fixed-size window around target words, and vice versa. Although word embeddings produced by these models have been shown to give improvements in a variety of downstream tasks, it still remains difficult for these models to learn syntactic word representations owing to their insensitivity to word order. As a consequence, word embeddings produced by these order-insensitive models are thus suboptimal for syntax-related tasks such as parsing (Andreas and Klein, 2014) . In contrast, our method mainly focuses on word order information and utilize it in the learning process. Ling et al. (2015b) introduced the structured skip-gram (SSG) model and the continuous window (CWindow) that extend SG and CBOW respectively. Let c be the window size. These models learn 2c context-embedding matrices to be aware of relative positions of context words in a window. The recent work of Trask et al. (2015) is also based on the same idea as SSG and CWindow. Ling et al. (2015a) proposed an approach to integrating an order-sensitive attention mechanism into CBOW, which allows for consideration of the contexts of words, and where the context words appear in a window. Bengio et al. (2003) presented a neural network language model (NNLM) where word embeddings are simultaneously learned along with a language model. One of the major shortcomings of these window-based approaches is that it is almost impossible to learn longer dependencies between words than the prefixed window size c. In contrast, the recurrent architecture allows the WON to take into account dependencies over an entire sentence. Mikolov et al. (2010) applied an RNN for language modeling (RNNLM), and demonstrated that the word embeddings learned by the RNNLM capture both syntactic and semantic regularities. The main shortcoming of the RNNLM is that it is very slow to train unfortunately. This is a consequence of having to predict the probability distribution over an entire vocabulary V , which is generally very large in the real world. 
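For reference, one decoding step of the WON (Eqs. 7-9) can be sketched as follows; h_r and W_s are random stand-ins for the LSTM state of Eq. 5 and the trained scoring matrix of Eq. 8, so the numbers are illustrative only.

```python
import numpy as np

# Illustrative single decoding step; h_r and W_s are random stand-ins for
# the LSTM state of Eq. 5 and the trained bilinear matrix of Eq. 8.
rng = np.random.default_rng(0)
N, D = 5, 8
E = rng.normal(size=(N, D))      # embeddings e_1..e_N of the shuffled tokens
h_r = rng.normal(size=D)
W_s = rng.normal(size=(D, D))

# score(h_r, e_c) = h_r^T W_s e_c for every input token c (Eq. 8).
scores = E @ W_s.T @ h_r         # shape (N,)

# Selective distribution over the N input tokens (Eq. 7), max-shifted
# for numerical stability.
p_r = np.exp(scores - scores.max())
p_r /= p_r.sum()

# One term of the negative log likelihood in Eq. 9, assuming the token
# that belongs at position r has (hypothetical) input index t_r = 2.
t_r = 2
step_loss = -np.log(p_r[t_r])
print(p_r.round(3), float(step_loss))
```
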
In contrast, the WON predicts the probability distribution over entire sentences, whose length N is usually less than 50 |V |. In our preliminary experiments, we found that the computation time for one iteration (= forward + backward + parameter update) of the WON is about 4 times faster than that of the RNNLM (LSTMLM). Levy and Goldberg (2014) introduced dependency-based word embeddings.", "cite_spans": [ { "start": 131, "end": 155, "text": "Mikolov et al. (2013a,b)", "ref_id": null }, { "start": 371, "end": 385, "text": "(Harris, 1954;", "ref_id": "BIBREF9" }, { "start": 386, "end": 398, "text": "Firth, 1957)", "ref_id": "BIBREF7" }, { "start": 1022, "end": 1047, "text": "(Andreas and Klein, 2014)", "ref_id": "BIBREF0" }, { "start": 1155, "end": 1174, "text": "Ling et al. (2015b)", "ref_id": "BIBREF13" }, { "start": 1455, "end": 1474, "text": "Trask et al. (2015)", "ref_id": "BIBREF24" }, { "start": 1526, "end": 1545, "text": "Ling et al. (2015a)", "ref_id": "BIBREF12" }, { "start": 1737, "end": 1757, "text": "Bengio et al. (2003)", "ref_id": "BIBREF2" }, { "start": 2170, "end": 2191, "text": "Mikolov et al. (2010)", "ref_id": "BIBREF17" }, { "start": 2906, "end": 2930, "text": "Levy and Goldberg (2014)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "The method applies the skip-gram with negative sampling (SGNS) model (Mikolov et al., 2013b) to syntactic contexts derived from dependency parse-trees.", "cite_spans": [ { "start": 69, "end": 92, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "Their method heavily relies on pre-trained dependency parsers to produce words' relations for each sentence in training corpora, thus encountering error propagation problems. In contrast, our method only requires raw corpora, and our aim is to produce word embeddings that improve syntax-related tasks, such as parsing, without using any human annotations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "The WON can be interpreted as a simplification of the recently proposed pointer network (Ptr-Net) (Vinyals et al., 2015) . The main difference between the WON and the PtrNet is the encoder part. The PtrNet uses an RNN to encode an unordered set X = {w 1 , . . . , w N } sequentially, i.e.,", "cite_spans": [ { "start": 98, "end": 120, "text": "(Vinyals et al., 2015)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "e i = RNN enc (w i , e i\u22121 ).", "eq_num": "(10)" } ], "section": "Related Work", "sec_num": "3" }, { "text": "In contrast, the WON treats each symbol independently (Eq. 1) and aggregates them with a simpler function (Eq. 2). In the word ordering task, the order of X = (w 1 , . . . , w N ) is meaningless because X is an out-of-order set. Nonetheless, according to Equation 10, the vector e i depends on the input order of w 1 , . . . , w i\u22121 . Vinyals et al. (2015) evaluated the PtrNet on geometric sorting tasks (e.g., Travelling Salesman Problem) where each input w i forms a continuous vector that represents the cartesian coordinate of the point (e.g., a city). 
However, in the word ordering task, Equation 10 suffers from the data sparseness problem, as each input w i forms a high-dimensional discrete symbol.", "cite_spans": [ { "start": 335, "end": 356, "text": "Vinyals et al. (2015)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "4 Experimental Setting", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "3" }, { "text": "We used the English Wikipedia corpus as the training corpus. We lowercased and tokenized all tokens, and then replaced all digits with \"7\" (e.g., \"ABC2017\"\u2192\"ABC7777\"). We built a vocabulary of the most frequent 300K words and replaced out-of-vocabulary tokens with a special \" UNK \" symbol. Subsequently, we appended special \" EOS \" symbols to the end of each sentence. The resulting corpus contains about 97 million sentences with about 2 billion tokens. We randomly extracted 5K sentences as the validation set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset and Preprocessing", "sec_num": "4.1" }, { "text": "We set the dimensionality of word embeddings to 300. The dimensionality of the hidden states of the LSTM was 512. The L2 regularization term (called weight decay) was set to 4 \u00d7 10 \u22126 . For the stochastic gradient descent algorithm, we used the SMORMS3 (Func, 2015) , and the mini-batch size was set to 180.", "cite_spans": [ { "start": 253, "end": 265, "text": "(Func, 2015)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Hyper Parameters", "sec_num": "4.2" }, { "text": "For a fair comparison, we trained the following order-insensitive/sensitive baselines on exactly the same pre-processed corpus described in Section 4.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3" }, { "text": "\u2022 SGNS (Mikolov et al., 2013b) : We used the word2vec implementation in Gensim 2 to train the Skip-Gram with Negative Sampling (SGNS). We set the window size to 5, and the number of negative samples to 5.", "cite_spans": [ { "start": 7, "end": 30, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3" }, { "text": "\u2022 GloVe (Pennington et al., 2014 ): GloVe's embeddings are trained by using the original implementation 3 provided by the authors. We set the window size to 15. In our preliminary experiments, we found that GloVe with a window size of 15 yields higher performances than that with a window size of 5.", "cite_spans": [ { "start": 8, "end": 32, "text": "(Pennington et al., 2014", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3" }, { "text": "\u2022 SSG, CWindow (Ling et al., 2015b) : We built word embeddings by using the structured skip-gram (SSG) and the continuous window (CWindow). We used the original implementation 4 developed by the authors. The window size was 5, and the number of negative samples was 5.", "cite_spans": [ { "start": 15, "end": 35, "text": "(Ling et al., 2015b)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3" }, { "text": "\u2022 LSTMLM: We also compared the proposed method with the RNNLM (Mikolov et al., 2010) with LSTM units (LSTMLM). The hyper parameters were the same with that of the WON except for the mini-batch size. 
We used a mini-batch size of 100 for the LSTMLM.", "cite_spans": [ { "start": 62, "end": 84, "text": "(Mikolov et al., 2010)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3" }, { "text": "In this experiment, we evaluated the learned word embeddings by using them as pre-trained features in supervised POS tagging.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation on Part-of-Speech Tagging", "sec_num": "5" }, { "text": "Test Acc. (%) SGNS (Mikolov et al., 2013b) 96.76 GloVe (Pennington et al., 2014) 96.31 SSG (Ling et al., 2015b) 96.94 CWindow (Ling et al., 2015b) 96.78 LSTMLM 96.92 WON 97.04 Table 1 : Comparison results on part-of-speech tagging with different word embeddings. The dataset is the Wall Street Journal (WSJ) portion of the Penn Treebank (PTB) corpus. The evaluation metric is accuracy (%).", "cite_spans": [ { "start": 19, "end": 42, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF18" }, { "start": 55, "end": 80, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF20" }, { "start": 91, "end": 111, "text": "(Ling et al., 2015b)", "ref_id": "BIBREF13" }, { "start": 126, "end": 146, "text": "(Ling et al., 2015b)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 176, "end": 183, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Evaluation on Part-of-Speech Tagging", "sec_num": "5" }, { "text": "In POS tagging, every token in a sentence is classified into its POS tag (NN for nouns, VBD for past tense verbs, JJ for adjectives, etc.). We first used the learned word embeddings to project three successive tokens (w i\u22121 , w i , w i+1 ) in an input sentence to feature vectors (e i\u22121 , e i , e i+1 ) that are then concatenated and fed to a two-layer perceptron followed by a softmax function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervised POS Tagger", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (c|w i\u22121 , w i , w i+1 ) = MLP([e i\u22121 ; e i ; e i+1 ]),", "eq_num": "(11)" } ], "section": "Supervised POS Tagger", "sec_num": "5.1" }, { "text": "where [\u2022 ; \u2022 ; \u2022] denotes vector concatenation. The classifier MLP predicts the probability distribution over POS tags of the center token w i . We put special padding symbols at the beginning and end of each sentence. The dimensionality of the hidden layer of the MLP was 300. The MLP classifier was trained via the SMORMS3 optimizer (Func, 2015) without updating the word embedding layer. We used the Wall Street Journal (WSJ) portion of the Penn Treebank (PTB) corpus 5 (Marcus et al., 1993) . We followed the standard section partition, which is to use sections 0-18 for training, sections 19-21 for validation, and sections 22-24 for testing. The dataset contains 45 tags. The evaluation metric was the word-level accuracy. 
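A minimal sketch of the classifier in Eq. 11 is given below. The weights are random placeholders (in the experiments they are trained with SMORMS3 while the embedding layer is kept frozen), and tanh is an assumed choice of hidden non-linearity, since the text does not specify one.

```python
import numpy as np

# Toy version of the tagging classifier in Eq. 11. All weights are random
# placeholders; tanh is an assumed hidden non-linearity.
rng = np.random.default_rng(0)
D, H, T = 300, 300, 45      # embedding size, MLP hidden size, number of tags

W1 = rng.normal(scale=0.01, size=(H, 3 * D))
b1 = np.zeros(H)
W2 = rng.normal(scale=0.01, size=(T, H))
b2 = np.zeros(T)

def tag_distribution(e_prev, e_cur, e_next):
    # P(c | w_{i-1}, w_i, w_{i+1}) = MLP([e_{i-1}; e_i; e_{i+1}])
    x = np.concatenate([e_prev, e_cur, e_next])
    h = np.tanh(W1 @ x + b1)
    logits = W2 @ h + b2
    p = np.exp(logits - logits.max())
    return p / p.sum()

# Toy usage: random vectors stand in for pre-trained word embeddings.
e_prev, e_cur, e_next = (rng.normal(size=D) for _ in range(3))
print(tag_distribution(e_prev, e_cur, e_next).argmax())   # predicted tag index
```
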
", "cite_spans": [ { "start": 335, "end": 347, "text": "(Func, 2015)", "ref_id": "BIBREF8" }, { "start": 473, "end": 494, "text": "(Marcus et al., 1993)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Supervised POS Tagger", "sec_num": "5.1" }, { "text": "In this experiment, as in Section 5, we evaluated the learned word embeddings on supervised dependency parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation on Dependency Parsing", "sec_num": "6" }, { "text": "Dependency parsing aims to identify syntactic relations between token pairs in a sentence. We used Stanford's neural network dependency parser (Chen and Manning, 2014) 6 , whose word embeddings were initialized with the learned word embeddings. We followed all the default settings except for the word embedding size (embeddingSize = 300) and the number of training iterations (maxIter = 6000).", "cite_spans": [ { "start": 143, "end": 167, "text": "(Chen and Manning, 2014)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Supervised Dependency Parser", "sec_num": "6.1" }, { "text": "We used the WSJ portion of the PTB corpus and followed the standard splits of sections 2-21 for training, 22 for validation, and 23 for testing. We converted the treebank corpus to Stanford style dependencies using the Stanford converter. The parsing performances were evaluated in terms of Unlabeled Attachment Score (UAS) and Labeled Attachment Score (LAS). Table 2 shows the results of the different word embeddings on dependency parsing. First we observe that the WON consistently outperforms the baselines on both UAS and LAS. Next, by comparing the unlimited-context models (WON and LSTMLM) with the limited-context models (SGNS, GloVe, SSG, CWindow), we can notice that the former give higher parsing scores than the latter. These results are reasonable because the former can learn arbitrary-length syntactic dependencies between words without constraints from the fixed-size window size based on which the limited-window models are trained.", "cite_spans": [], "ref_spans": [ { "start": 360, "end": 367, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Supervised Dependency Parser", "sec_num": "6.1" }, { "text": "In various NLP tasks, both syntactic and semantic features can benefit performances. To enrich our syntax-oriented word embeddings with semantic information, in this section, we adopt a simple fine-tuning technique and verify its effectiveness. More precisely, we first initialize the word embeddings W with pre-trained parameters W sem produced by a semantics-oriented model such as the SGNS. Subsequently we add the following penalty term to the loss function in Equation 9:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fusion with Semantic Features", "sec_num": "7" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03bb W \u2212 W sem 2 F ,", "eq_num": "(12)" } ], "section": "Fusion with Semantic Features", "sec_num": "7" }, { "text": "where \u03bb \u2208 R is a hyper parameter to control the intensity of the penalty term in the learning process, and \u2022 2 F is the Frobenius norm. This term attempts to keep the word embeddings W close to the semantic representations W sem while minimizing the syntax-oriented objective on the word ordering task. In our experiments, we used the SGNS's embeddings as W sem and set \u03bb to 1. 
The SGNS was trained as explained in Section 4.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fusion with Semantic Features", "sec_num": "7" }, { "text": "In this section, we quantitatively evaluated the WON with the above fine-tuning technique on two major benchmarks: (1) word analogy task, and (2) word similarity task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fusion with Semantic Features", "sec_num": "7" }, { "text": "The word analogy task has been used in previous work to evaluate the ability of word embeddings to represent semantic and syntactic regularities. In this experiment, we used the word analogy dataset produced by Mikolov et al. (2013a) . The dataset consists of questions like \"A is to B what C is to ?,\" denoted as \"A : B :: C : ?.\" The dataset contains about 20K such questions, divided into a syntactic subset and a semantic subset. The syntactic subset contains nine question types, such as adjective-to-adverb and opposite, while the semantic subset contains five question Table 3 : Results on the word analogy task (Mikolov et al., 2013a) with different word embeddings. The first upper block presents the results on nine syntactic question types. In the lower block we show the results on five semantic question types. The last row presents the total score. The evaluation metric is accuracy (%).", "cite_spans": [ { "start": 211, "end": 233, "text": "Mikolov et al. (2013a)", "ref_id": "BIBREF16" }, { "start": 619, "end": 642, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 576, "end": 583, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Word Analogy", "sec_num": "7.1" }, { "text": "types such as city-in-state and family.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Analogy", "sec_num": "7.1" }, { "text": "Suppose that a vector e w is a representation of a word w, and is normalized to unit norm. Following a previous work (Mikolov et al., 2013a ), we answer an analogy question \"A : B :: C : ?\" by finding a word w * that has the closest representation to (e B \u2212 e A + e C ) in terms of cosine similarity, i.e.,", "cite_spans": [ { "start": 117, "end": 139, "text": "(Mikolov et al., 2013a", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Word Analogy", "sec_num": "7.1" }, { "text": "w * = argmax w\u2208V \\{A,B,C}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Analogy", "sec_num": "7.1" }, { "text": "(e B \u2212 e A + e C ) e w e B \u2212 e A + e C ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Analogy", "sec_num": "7.1" }, { "text": "where V denotes the vocabulary. The evaluation was performed using accuracy, which denotes the percentage of words predicted correctly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Analogy", "sec_num": "7.1" }, { "text": "In Table 3 , we report the results of the different word embeddings on this task. As can be seen in the Table 3 , the WON outperforms the baselines on four out of nine syntactic question types, and tends to yield higher accuracies by a large margin than the baselines except for the SSG. Our method and the SSG totally give the best performances on seven of nine syntactic question types. This tendency, as in Section 5.2, indicates that word order information is crucial to learn syntactic word embeddings. In regard to semantics, the WON achieves the best scores on three out of five semantic question types. 
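For completeness, the retrieval rule used to answer each analogy question can be sketched as below; the function name and the toy vectors are ours, with random unit vectors standing in for learned embeddings.

```python
import numpy as np

def answer_analogy(A, B, C, vocab, vectors):
    # Return the word w (excluding A, B, C) whose embedding is closest in
    # cosine similarity to e_B - e_A + e_C; embeddings are unit-normalized.
    query = vectors[B] - vectors[A] + vectors[C]
    query = query / np.linalg.norm(query)
    best_word, best_sim = None, -np.inf
    for w in vocab:
        if w in (A, B, C):
            continue
        sim = float(query @ vectors[w])
        if sim > best_sim:
            best_word, best_sim = w, sim
    return best_word

# Toy usage with random unit vectors standing in for learned embeddings.
rng = np.random.default_rng(0)
vocab = ['man', 'woman', 'king', 'queen', 'paris']
vectors = {}
for w in vocab:
    v = rng.normal(size=8)
    vectors[w] = v / np.linalg.norm(v)
print(answer_analogy('man', 'woman', 'king', vocab, vectors))
```
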
Interestingly, on two semantic question types (capital-common and city), the WON outperforms the SGNS that was used to Table 4 : Results on the word similarity task with different word embeddings. Spearman's rank correlation coefficents (%) are computed on three datasets: WS-353, MC, and RG. initialize our word embeddings. This result implies that the word ordering task has the potential to improve not only syntactic but also semantic features. Our method achieves the highest accuracy on the overall score.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 3", "ref_id": null }, { "start": 104, "end": 111, "text": "Table 3", "ref_id": null }, { "start": 730, "end": 737, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Word Analogy", "sec_num": "7.1" }, { "text": "The word similarity benchmark is commonly used to evaluate word embeddings in terms of distributional semantic similarity. The word similarity datasets consist of triplets like (w 1 , w 2 , s), where s \u2208 R is a human-annotated similarity score between two words (w 1 , w 2 ). In this task, we compute cosine similarity between two word embeddings. The evaluation is performed with the Spearman's rank correlation coefficient between Table 5 : Query words and their most similar words. Cosine similarities are computed between their embeddings produced by the WON.", "cite_spans": [], "ref_spans": [ { "start": 433, "end": 440, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Word Similarity", "sec_num": "7.2" }, { "text": "the human-annotated similarities and the computed similarities. Table 4 presents the results on three datasets: WordSim-353 (Finkelstein et al., 2001 ), MC (Miller and Charles, 1991) , and RG (Rubenstein and Goodenough, 1965). we observe that the WON gives a slightly higher performance than the baselines on the MC dataset. On the other datasets, the SSG yields the best performances. These results are interesting because the two models rely on word order information while the word similarity task originally focuses on topical semantic similarities between words.", "cite_spans": [ { "start": 124, "end": 149, "text": "(Finkelstein et al., 2001", "ref_id": "BIBREF6" }, { "start": 156, "end": 182, "text": "(Miller and Charles, 1991)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 64, "end": 71, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Word Similarity", "sec_num": "7.2" }, { "text": "Further investigation into the interaction between syntactic and semantic representations would be interesting and needs to be explored.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Similarity", "sec_num": "7.2" }, { "text": "In this section, we inspect the learned vector space by computing the similarities between word embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "8" }, { "text": "In this experiment we trained the WON on the BookCorpus (Zhu et al., 2015) that is preprocessed in the same way described in Section 4.1. The BookCorpus consists of a large collection of nov-els, which results in a grammatically sophisticated text corpus that would be suitable for qualitative analysis. Note that to clearly investigate the word embeddings produced by the WON we neither initialize our word embeddings with other models nor use fine-tuning techniques, as in experiments on downstream syntax-related tasks (Section 5 and Section 6). 
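The inspection itself is a simple nearest-neighbor search: rank the vocabulary by cosine similarity to a query embedding. A minimal sketch, with random vectors in place of the trained WON embeddings, is:

```python
import numpy as np

def nearest_neighbors(query, vocab, vectors, k=5):
    # Rank words by cosine similarity to the query word's embedding.
    q = vectors[query] / np.linalg.norm(vectors[query])
    sims = []
    for w in vocab:
        if w == query:
            continue
        v = vectors[w] / np.linalg.norm(vectors[w])
        sims.append((float(q @ v), w))
    return sorted(sims, reverse=True)[:k]

# Toy usage with random vectors standing in for the trained WON embeddings.
rng = np.random.default_rng(0)
vocab = ['he', 'she', 'they', 'walked', 'walking', 'under']
vectors = {w: rng.normal(size=16) for w in vocab}
print(nearest_neighbors('he', vocab, vectors, k=3))
```
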
We choose queries focusing on (1) declension of personal pronouns, (2) singular and plural forms of nouns, (3) verb conjugation, (4) comparative/superlative forms of adjectives, and (5) prepositions. Table 5 presents some representative queries for (1)-(5) and their respective most similar words in the learned vector space. First we can observe that our word embeddings produce a continuous vector space that successfully captures syntactic regularities. In addition to the syntactic regularities, interestingly, we found that the WON prefers to gather words in terms of those meanings or semantic categories.", "cite_spans": [ { "start": 56, "end": 74, "text": "(Zhu et al., 2015)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 749, "end": 756, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "8" }, { "text": "The research question we explored in this study was how to learn syntactic word embeddings without using any human annotations. Our underlying hypothesis is that the word odering task is suitable for obtaining syntactic knowledge about words. To verify this idea, we developed the WON, which implicitly learns syntactic word representations through learning to explicitly solve the word ordering task. The experimental results demonstrate that the WON gives improvements over baselines particularly on syntax-related tasks, such as partof-speech tagging and dependency parsing. We can also observe that the WON, by combined with a simple fine-tuning technique, has the potential to refine not only syntactic but also semantic features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "9" }, { "text": "It remains unclear how well order-sensitive models like the WON can learn syntactic knowledge about words in languages other than English. Especially, it is interesting to investigate cases on languages with richer morphology and freer word order. We leave this to future work. discussion. This work was supported by JSPS KAKENHI Grant Number 16H05872 and JST CREST JPMJCR1304.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "9" }, { "text": "https://radimrehurek.com/gensim/ 3 http://nlp. stanford.edu/projects/glove/ 4 https://github.com/wlin12/wang2vec", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used the LDC99T42 Treebank release 3 version.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://nlp.stanford.edu/software/nndep.shtml", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors would like to thank the anonymous reviewers for their constructive and helpful suggestions on this work. We also thank Makoto Miwa and Naoaki Okazaki for valuable comments and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "How much do word embeddings encode about syntax?", "authors": [ { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Andreas and Dan Klein. 2014. 
How much do word embeddings encode about syntax? In Pro- ceedings of the 52nd Annual Meeting of the Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1409.0473" ] }, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A neural probabilistic language model", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "R\u00e9jean", "middle": [], "last": "Ducharme", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Jauvin", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "1137--1155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic lan- guage model. Journal of Machine Learning Re- search, 3(Feb):1137-1155.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A fast and accurate dependency parser using neural networks", "authors": [ { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural net- works. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Process- ing.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493-2537.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Finding structure in time", "authors": [ { "first": "Jeffrey", "middle": [ "L" ], "last": "Elman", "suffix": "" } ], "year": 1990, "venue": "Cognitive Science", "volume": "14", "issue": "2", "pages": "179--211", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey L. Elman. 1990. 
Finding structure in time. Cognitive Science, 14(2):179-211.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Placing search in context: The concept revisited", "authors": [ { "first": "Lev", "middle": [], "last": "Finkelstein", "suffix": "" }, { "first": "Evgeniy", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "Yossi", "middle": [], "last": "Matias", "suffix": "" }, { "first": "Ehud", "middle": [], "last": "Rivlin", "suffix": "" }, { "first": "Zach", "middle": [], "last": "Solan", "suffix": "" }, { "first": "Gadi", "middle": [], "last": "Wolfman", "suffix": "" }, { "first": "Eytan", "middle": [], "last": "Ruppin", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 10th International Conference on World Wide Web", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Ey- tan Ruppin. 2001. Placing search in context: The concept revisited. In Proceedings of the 10th Inter- national Conference on World Wide Web.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A synopsis of linguistic theory", "authors": [ { "first": "John", "middle": [ "R" ], "last": "Firth", "suffix": "" } ], "year": 1957, "venue": "", "volume": "", "issue": "", "pages": "1930--1955", "other_ids": {}, "num": null, "urls": [], "raw_text": "John R. Firth. 1957. A synopsis of linguistic theory, 1930-1955. Blackwell.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Smorms3 -blog entry: Rmsprop loses to smorms3", "authors": [ { "first": "Simon", "middle": [], "last": "Func", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simon Func. 2015. Smorms3 -blog entry: Rm- sprop loses to smorms3 -beware the epsilon! http://sifter.org/ simon/journal/20150420.html.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Distributional structure. Word", "authors": [ { "first": "S", "middle": [], "last": "Zellig", "suffix": "" }, { "first": "", "middle": [], "last": "Harris", "suffix": "" } ], "year": 1954, "venue": "", "volume": "10", "issue": "", "pages": "146--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zellig S. Harris. 1954. Distributional structure. Word, 10(2-3):146-162.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural Computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Dependencybased word embeddings", "authors": [ { "first": "Over", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Over Levy and Yoav Goldberg. 2014. Dependency- based word embeddings. 
In Proceedings of the 52nd Annual Meeting of the Association for Computa- tional Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Not all contexts are created equal: Better word representations with variable attention", "authors": [ { "first": "Wang", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Lin", "middle": [], "last": "Chu-Cheng", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" }, { "first": "Silvio", "middle": [], "last": "Amir", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang Ling, Lin Chu-Cheng, Yulia Tsvetkov, and Sil- vio Amir. 2015a. Not all contexts are created equal: Better word representations with variable attention. In Proceedings of the 2015 Conference of Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Two/too simple adaptation of word2vec for syntax problems", "authors": [ { "first": "Wang", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Black", "suffix": "" }, { "first": "Isabel", "middle": [], "last": "Trancoso", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang Ling, Chris Dyer, Alan Black, and Isabel Trancoso. 2015b. Two/too simple adaptation of word2vec for syntax problems. In Proceedings of the 2015 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Effective approaches to attentionbased neural machine translation", "authors": [ { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention- based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Nat- ural Language Processing.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Building a large annotated corpus of english: The penn treebank", "authors": [ { "first": "Mitchell", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "Mary", "middle": [ "Ann" ], "last": "Marcinkiewicz", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. 
Computa- tional Linguistics, 19(2):313-330.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg S. Corrado, and Jef- frey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Recurrent neural network based language model", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Karafi\u00e1t", "suffix": "" }, { "first": "Lukas", "middle": [], "last": "Burget", "suffix": "" } ], "year": 2010, "venue": "Proceedings of INTERSPEECH", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Martin Karafi\u00e1t, Lukas Burget, Jan Cernock\u1ef3, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proceed- ings of INTERSPEECH.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in Neural Information Processing Systems.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Contextual correlates of semantic similarity", "authors": [ { "first": "George", "middle": [ "A" ], "last": "Miller", "suffix": "" }, { "first": "Walter", "middle": [ "G" ], "last": "Charles", "suffix": "" } ], "year": 1991, "venue": "Language and Cognitive Processes", "volume": "6", "issue": "1", "pages": "1--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A. Miller and Walter G. Charles. 1991. Con- textual correlates of semantic similarity. Language and Cognitive Processes, 6(1):1-28.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Glove: Global vectors for word representations", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. 
GloVe: Global vectors for word representations. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Contextual correlates of synonymy", "authors": [ { "first": "Herbert", "middle": [], "last": "Rubenstein", "suffix": "" }, { "first": "John", "middle": [ "B" ], "last": "Goodenough", "suffix": "" } ], "year": 1965, "venue": "Communications of the ACM", "volume": "8", "issue": "10", "pages": "627--633", "other_ids": {}, "num": null, "urls": [], "raw_text": "Herbert Rubenstein and John B. Goodenough. 1965. Contextual correlates of synonymy. Communications of the ACM, 8(10):627-633.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Semi-supervised recursive autoencoders for predicting sentiment distributions", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Eric", "middle": [ "H" ], "last": "Huang", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning", "authors": [ { "first": "Tijmen", "middle": [], "last": "Tieleman", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2012, "venue": "", "volume": "4", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tijmen Tieleman and Geoffrey Hinton. 2012. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 4(2).", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Modeling order in neural word embeddings at scale", "authors": [ { "first": "Andrew", "middle": [], "last": "Trask", "suffix": "" }, { "first": "David", "middle": [], "last": "Gilmore", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Russell", "suffix": "" } ], "year": 2015, "venue": "Proceedings of The 32nd International Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Trask, David Gilmore, and Matthew Russell. 2015. Modeling order in neural word embeddings at scale.
In Proceedings of The 32nd International Conference on Machine Learning.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Word representations: A simple and general method for semi-supervised learning", "authors": [ { "first": "Joseph", "middle": [], "last": "Turian", "suffix": "" }, { "first": "Lev", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Pointer networks", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Meire", "middle": [], "last": "Fortunato", "suffix": "" }, { "first": "Navdeep", "middle": [], "last": "Jaitly", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "authors": [ { "first": "Yukun", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zemel", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Raquel", "middle": [], "last": "Urtasun", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Torralba", "suffix": "" }, { "first": "Sanja", "middle": [], "last": "Fidler", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1506.06724" ] }, "num": null, "urls": [], "raw_text": "Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. arXiv preprint arXiv:1506.06724.", "links": null } }, "ref_entries": { "TABREF0": { "num": null, "text": "presents the comparison of the WON to the other baselines on the test split. The results demonstrate that the WON gives the highest performance, which supports our hypothesis that the word ordering task is effective for acquiring syntactic knowledge about words. We also observe that the order-sensitive methods (WON, LSTMLM, and SSG) tend to outperform the order-insensitive methods (SGNS and GloVe), which indicates that, as we expect, word order information is crucial for learning syntactic word embeddings.", "html": null, "type_str": "table", "content": "
Model   | Dev UAS | Dev LAS | Test UAS | Test LAS
SGNS    | 91.56   | 90.09   | 91.11    | 89.89
GloVe   | 88.87   | 87.09   | 88.28    | 86.61
SSG     | 91.11   | 89.60   | 90.93    | 89.43
CWindow | 91.23   | 89.69   | 91.16    | 89.67
LSTMLM  | 91.83   | 90.34   | 91.49    | 90.08
WON     | 91.92   | 90.49   | 91.82    | 90.38
" }, "TABREF1": { "num": null, "text": "", "html": null, "type_str": "table", "content": "
Results on dependency parsing with different word embeddings. The dataset was the WSJ portion of the PTB corpus. The evaluation metrics were Unlabeled Attachment Score (UAS) and Labeled Attachment Score (LAS); see the sketch after these entries for how the two scores are computed.
" } } } }