{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:13:21.680030Z"
},
"title": "Corrected CBOW Performs as well as Skip-gram",
"authors": [
{
"first": "Ozan",
"middle": [],
"last": "\u0130rsoy",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rutgers University",
"location": {}
},
"email": "oirsoy@bloomberg.net"
},
{
"first": "Adrian",
"middle": [],
"last": "Benton",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rutgers University",
"location": {}
},
"email": "abenton10@bloomberg.net"
},
{
"first": "Karl",
"middle": [],
"last": "Stratos",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rutgers University",
"location": {}
},
"email": "karl.stratos@rutgers.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Mikolov et al. (2013a) observed that continuous bag-of-words (CBOW) word embeddings tend to underperform Skip-gram (SG) embeddings, and this finding has been reported in subsequent works. We find that these observations are driven not by fundamental differences in their training objectives, but more likely on faulty negative sampling CBOW implementations in popular libraries such as the official implementation, word2vec.c, and Gensim. We show that after correcting a bug in the CBOW gradient update, one can learn CBOW word embeddings that are fully competitive with SG on various intrinsic and extrinsic tasks, while being many times faster to train.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Mikolov et al. (2013a) observed that continuous bag-of-words (CBOW) word embeddings tend to underperform Skip-gram (SG) embeddings, and this finding has been reported in subsequent works. We find that these observations are driven not by fundamental differences in their training objectives, but more likely on faulty negative sampling CBOW implementations in popular libraries such as the official implementation, word2vec.c, and Gensim. We show that after correcting a bug in the CBOW gradient update, one can learn CBOW word embeddings that are fully competitive with SG on various intrinsic and extrinsic tasks, while being many times faster to train.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Pre-trained word embeddings are a standard way to boost performance on many tasks of interest such as named-entity recognition (NER), where contextual word embeddings such as BERT (Devlin et al., 2019) are computationally expensive and may only yield marginal gains. Word2vec (Mikolov et al., 2013a ) is a popular method for learning word embeddings because of its scalability and robust performance. There are two main Word2vec training objectives: (1) continuous bag-of-words (CBOW) that predicts the center word by averaging context embeddings, and (2) Skip-gram (SG) that predicts context words from the center word. 1 It has been frequently observed that while CBOW is fast to train, it lags behind SG in performance. 2 This observation is made by the inventors of Word2vec themselves (Mikolov, 2013) , and also independently in subsequent works (Pennington et al., 2014; Stratos et al., 2015) . This result is surprising since the CBOW and SG objectives lead to very similar weight updates. This is also contrary to the enormous success of contextual word embeddings based on masked language modeling (MLM), as CBOW follows a rudimentary form of MLM (i.e., predicting a masked target word from a context window without any perturbation of the masked word).",
"cite_spans": [
{
"start": 180,
"end": 201,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 276,
"end": 298,
"text": "(Mikolov et al., 2013a",
"ref_id": "BIBREF10"
},
{
"start": 621,
"end": 622,
"text": "1",
"ref_id": null
},
{
"start": 790,
"end": 805,
"text": "(Mikolov, 2013)",
"ref_id": "BIBREF9"
},
{
"start": 851,
"end": 876,
"text": "(Pennington et al., 2014;",
"ref_id": "BIBREF12"
},
{
"start": 877,
"end": 898,
"text": "Stratos et al., 2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we find that the performance discrepancy between CBOW and SG embeddings is founded less on theoretical differences in their training objectives but more on faulty CBOW implementations in standard libraries such as the official implementation word2vec.c (Mikolov et al., 2013b) , Gensim (\u0158eh\u016f\u0159ek and Sojka, 2010) and fastText (Bojanowski et al., 2017) . Specifically, we find that in these implementations, the gradient for source embeddings is incorrectly multiplied by the context window size, resulting in incorrect weight updates and inferior performance.",
"cite_spans": [
{
"start": 267,
"end": 290,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF11"
},
{
"start": 300,
"end": 325,
"text": "(\u0158eh\u016f\u0159ek and Sojka, 2010)",
"ref_id": "BIBREF14"
},
{
"start": 339,
"end": 364,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We make the following contributions. First, we show that our correct implementation of CBOW indeed yields word embeddings that are fully competitive with SG while being trained in less than a third of the time (e.g., with a single machine it takes less than 73 minutes to train CBOW embeddings on the entire Wikipedia corpus). We present experiments on intrinsic word similarity and analogy tasks, as well as extrinsic eval uations on the SST-2, QNLI, and MNLI tasks from the GLUE benchmark (Wang et al., 2018) , and the CoNLL03 English named entity recognition task (Sang and De Meulder, 2003) .",
"cite_spans": [
{
"start": 491,
"end": 510,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 567,
"end": 594,
"text": "(Sang and De Meulder, 2003)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Second, we make our implementation, k\u014dan, 2 publicly available. 3 With this implementation, it is possible to train 768-dimensional CBOW embeddings on one epoch of English C4 (Raffel et al., 2020) in 1.61 days on a single 16 CPU machine. 4",
"cite_spans": [
{
"start": 175,
"end": 196,
"text": "(Raffel et al., 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The parameters of CBOW are two sets of word embeddings: \"source-side\" and \"target-side\" vectors v w , v w \u2208 R d for every word type w \u2208 V in the vocabulary. A window of text in a corpus consists of a center word w O and context words w 1 . . . w C . For instance, in the window the dog laughed, we have w O = dog and w 1 = the and w 2 = laughed. Given a window of text, the CBOW loss is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CBOW Implementation",
"sec_num": "2"
},
{
"text": "v c = 1 C C j=1 v w j L = \u2212 log \u03c3(v w O v c ) \u2212 k i=1 log \u03c3(\u2212v n i v c )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CBOW Implementation",
"sec_num": "2"
},
{
"text": "where n 1 . . . n k \u2208 V are negative examples drawn iid from some noise distribution P n over V . The gradients of L with respect to the target (v w O ), negative target (v n i ), and average context source (v c ) embeddings are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CBOW Implementation",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2202L \u2202v w O =(\u03c3(v w O v c ) \u2212 1)v c \u2202L \u2202v n i =\u03c3(v n i v c )v c \u2202L \u2202v c =(\u03c3(v w O v c ) \u2212 1)v w O + k i=1 \u03c3(v n i v c )v n i",
"eq_num": "(1)"
}
],
"section": "CBOW Implementation",
"sec_num": "2"
},
{
"text": "and by the chain rule with respect to a source context embedding:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CBOW Implementation",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2202L \u2202v w j = 1 C [(\u03c3(v w O v c ) \u2212 1)v w O + k i=1 \u03c3(v n i v c )v n i ]",
"eq_num": "(2)"
}
],
"section": "CBOW Implementation",
"sec_num": "2"
},
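To make the update in Eqs. (1)-(2) concrete, here is a minimal NumPy sketch of the negative-sampling gradients for a single window. It illustrates the equations above and is not the paper's k\u014dan code; the array names (v_ctx, v_tgt, v_neg) are hypothetical, and the final comment marks exactly where the faulty implementations omit the 1/C factor.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbow_gradients(v_ctx, v_tgt, v_neg):
    """Gradients of the CBOW negative-sampling loss for one window.

    v_ctx: (C, d) source embeddings of the context words w_1..w_C
    v_tgt: (d,)   target embedding of the center word w_O
    v_neg: (k, d) target embeddings of the negative samples n_1..n_k
    """
    C = v_ctx.shape[0]
    v_c = v_ctx.mean(axis=0)                              # averaged context vector

    g_tgt = (sigmoid(v_tgt @ v_c) - 1.0) * v_c            # dL/d v_tgt    (Eq. 1)
    g_neg = sigmoid(v_neg @ v_c)[:, None] * v_c           # dL/d v_neg_i  (Eq. 1)
    g_vc = (sigmoid(v_tgt @ v_c) - 1.0) * v_tgt + sigmoid(v_neg @ v_c) @ v_neg  # dL/d v_c

    # Correct source update (Eq. 2): each context word receives g_vc / C.
    # The faulty implementations apply g_vc to every context word,
    # i.e. they drop the 1/C factor below.
    g_ctx = np.tile(g_vc / C, (C, 1))                     # dL/d v_ctx[j] (Eq. 2)
    return g_tgt, g_neg, g_ctx
```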
{
"text": "However, the CBOW negative sampling implementations in word2vec.c 5 , Gensim 6 , and fastText 7 incorrectly update each context vector, v w j , by Eq. (1), without normalizing by the number of context words, given in Eq. (2). In fact, this error has been pointed out in several Gensim issues as well as a fastText issue. 8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CBOW Implementation",
"sec_num": "2"
},
{
"text": "Why This Error Matters Aside from being incorrect, this update matters for two reasons. First, in both Gensim and word2vec.c, the width of the context window is randomly sampled from 1, . . . , C max for every target word. This means that source embeddings which were averaged over wider context windows will experience a larger update than their contribution, relative to source embeddings averaged over narrower windows. Second, in the incorrect gradient, \u2202L \u2202v is incorrectly scaled by C, while \u2202L \u2202v is not; the correct stochastic gradient with respect to all embeddings actually points in a different direction than what was implemented in word2vec.c. Section 5 touches on both of these issues in more detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CBOW Implementation",
"sec_num": "2"
},
{
"text": "We evaluated CBOW and SG embeddings under Gensim and our implementations (both corrected and original CBOW). Unless otherwise noted, we learn word embeddings on the entire Wikipedia corpus PTB sentence split and tokenized with default settings, where words occurring fewer than ten times were dropped. Training hyperparameters were held fixed for each set of embeddings: negative sampling rate 5, maximum context window of 5 words, number of training epochs 5, and embedding dimensionality 300 unless otherwise noted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
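For reference, the training configuration described above (negative sampling rate 5, window 5, 5 epochs, 300 dimensions, minimum count 10) could be expressed with the Gensim 4.x API roughly as follows. This is an illustrative sketch, not the authors' exact command; the corpus path is hypothetical, and the worker count and downsampling threshold (the value given in Appendix B) are assumptions.

```python
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

# Hypothetical corpus file: one pre-tokenized sentence per line.
sentences = LineSentence("wiki.tokenized.txt")

model = Word2Vec(
    sentences,
    sg=0,             # 0 = CBOW, 1 = Skip-gram
    vector_size=300,  # embedding dimensionality
    window=5,         # maximum context window
    negative=5,       # negative sampling rate
    min_count=10,     # drop words occurring fewer than ten times
    sample=1e-3,      # downsampling frequency threshold (Appendix B)
    alpha=0.025,      # initial learning rate selected for Gensim CBOW
    epochs=5,
    workers=16,
)
model.wv.save_word2vec_format("cbow.vec")
```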
{
"text": "5 https://github.com/tmikolov/word2vec/blob/ 20c129af10659f7c50e86e3be406df663beff438/ word2vec.c#L483",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "6 https://github.com/ RaRe-Technologies/gensim/blob/ a93067d2ea78916cb587552ba0fd22727c4b40ab/ gensim/models/word2vec_inner.pyx#L455-L456.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "7 https://github.com/ facebookresearch/fastText/blob/ a20c0d27cd0ee88a25ea0433b7f03038cd728459/ src/model.cc#L85 In fastText, the normalization option is guarded by a boolean flag, but it defaults to false for CBOW.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "8 https://github.com/RaRe-Technologies/ gensim/issues/1873, https://github.com/ RaRe-Technologies/gensim/issues/697, https: //github.com/facebookresearch/fastText/issues/ 910.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "We found that the default initial learning rate in Gensim, 0.025, learned strong SG embeddings. However, we swept separately for CBOW initial learning rate for Gensim and our implementation. We swept over initial learning rate in {0.025, 0.05, 0.075, 0.1} selecting 0.025 to learn Gensim CBOW embeddings and 0.075 for corrected CBOW, to maximize average performance on the development fold (random partition of 50% of examples) of intrinsic evaluation tasks described in 3.1. We selected learning rate as this was a critical hyperparameter, and we found that CBOW embeddings learned with 0.075 learning rate with Gensim suffered compared to learning with a low learning rate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "We evaluated each set of embeddings intrinsically on the MEN, WS353, MIXED, and SYNT word similarity and analogy tasks, as described in Levy and Goldberg (2014) . We also evaluate on the Stanford rare words analogy task, RW (Luong et al., 2013) . Analogy tasks were evaluated by top-1 accuracy and similarity tasks using Spearman's rank correlation coefficient.",
"cite_spans": [
{
"start": 136,
"end": 160,
"text": "Levy and Goldberg (2014)",
"ref_id": "BIBREF7"
},
{
"start": 224,
"end": 244,
"text": "(Luong et al., 2013)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.1"
},
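As a reference point for the two metrics, a minimal sketch of the standard evaluation procedure is below: Spearman's rank correlation between cosine similarities and human judgments for the similarity datasets, and top-1 accuracy with the usual 3CosAdd rule for analogies. This is not the paper's evaluation code; E (a row-normalized embedding matrix) and vocab (a word-to-row dictionary) are assumed inputs, and out-of-vocabulary handling for analogy questions is omitted.

```python
import numpy as np
from scipy.stats import spearmanr

def similarity_spearman(E, vocab, pairs_with_gold):
    """Spearman correlation between cosine similarities and human scores.
    pairs_with_gold: list of (word1, word2, gold_score)."""
    sims, gold = [], []
    for w1, w2, score in pairs_with_gold:
        if w1 in vocab and w2 in vocab:
            sims.append(E[vocab[w1]] @ E[vocab[w2]])  # cosine (rows are unit norm)
            gold.append(score)
    return spearmanr(sims, gold)[0]

def analogy_top1(E, vocab, questions):
    """Top-1 accuracy on a:b :: c:d analogies using 3CosAdd."""
    correct = 0
    for a, b, c, d in questions:
        query = E[vocab[b]] - E[vocab[a]] + E[vocab[c]]
        query /= np.linalg.norm(query)
        scores = E @ query
        for w in (a, b, c):          # exclude the question words
            scores[vocab[w]] = -np.inf
        correct += int(np.argmax(scores) == vocab[d])
    return correct / len(questions)
```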
{
"text": "We also evaluated embeddings extrinsically on the SST-2, QNLI, and MNLI GLUE tasks, following the methodology in Wang et al. (2018) and use the frozen pre-trained word embeddings in a BiLSTM classifier. In addition, we also evaluated word embeddings in a CoNLL03 English named entity recognition (NER) sequence tagger (Sang and De Meulder, 2003) . For NER, we use a single-layer BiLSTM with 256-dimensional hidden layer as the sequence tagger, and evaluate under frozen and finetuned word embedding settings. See Appendix C for model selection details in extrinsic evaluation.",
"cite_spans": [
{
"start": 113,
"end": 131,
"text": "Wang et al. (2018)",
"ref_id": "BIBREF17"
},
{
"start": 318,
"end": 345,
"text": "(Sang and De Meulder, 2003)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.1"
},
{
"text": "Intrinsic Evaluation CBOW embeddings trained with our implementation achieve similar intrinsic performance to SG embeddings, surpassing SG embeddings for the RW and SYNT tasks ( Table 1) . On the other hand, CBOW embeddings trained by Gensim or our incorrect implementation achieve much worse performance than SG. We also report performance of CBOW embeddings, with and without the corrected update, trained for one epoch on the entirety of the English C4 dataset (768 dimensions and 5 million word vocabulary). These embeddings took 1.61 days per epoch to train on a single 16-core machine.",
"cite_spans": [],
"ref_spans": [
{
"start": 178,
"end": 186,
"text": "Table 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Extrinsic Evaluation Development and test set performance on GLUE tasks for each set of embeddings is reported in Table 2 . We also compare performance against 300-dimensional GloVe embeddings (Pennington et al., 2014) pre-trained on Wikipedia and Gigaword 5 and random embeddings drawn from a standard normal distribution as baselines. Table 3 contains CoNLL 2003 NER performance for each set of word embeddings. We used the same set of hyperparameters sampled in the GLUE evaluation for model selection (dev F1 as computed by the CoNLL evaluation script). NER test performance of corrected CBOW embeddings is within 1% F1 to SG embeddings, whereas Gensim CBOW embeddings suffer up to 4% F1 test performance relative to SG.",
"cite_spans": [
{
"start": 193,
"end": 218,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 114,
"end": 121,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 337,
"end": 344,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "The Gensim-trained CBOW embeddings achieve better test accuracy on SST-2 despite achieving similar performance on the development set as corrected CBOW embeddings. Gensim CBOW embeddings also outperform on QNLI. However, performance on MultiNLI is slightly different. In this case, corrected CBOW embeddings are almost 2% more accurate than Gensim embeddings when evaluated on the out of domain test set. For these GLUE tasks, it is unclear whether CBOW embeddings learned with our implementation are universally more performant than those learned by Gensim. We hypothesize this may be due to the susceptibility of models overfitting to the development set (especially since our classifier has 1024 dimensional hidden layers).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Aside from being incorrect, the faulty update leads to two major issues during training: (1) the norm of the source side vectors grows as a function of context width, and (2) despite being a descent direction, the faulty update diverges from the SGD direction as the number of negative samples increases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problems with Faulty CBOW",
"sec_num": "5.1"
},
{
"text": "Because the effective learning rate for source embeddings is greater than that for the source embeddings, one would expect the magnitude of the source embeddings to also grow larger. Growing source norms can be a problem during learning, as larger embedding norms can easily lead to saturated sigmoid activations, effectively halting learning. See Appendix D for an empirical analysis of embedding magnitude.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Norm Grows with Context",
"sec_num": null
},
{
"text": "The faulty CBOW update for source embeddings is a stochastic 9 multiple of the correct update. However, CBOW learns both source and target embeddings, which means that the update direction no longer follows the SGD update direction. Even though the faulty update is still a descent direction, we can analyze how the angle between it and the true SGD direction changes as a function of context width. If we make the simplifying assumption that the norm of 9 Context width is sampled uniformly at random between 1 and maximum context width in standard word2vec implementations. the source and target gradients are equal, then one can derive the cosine similarity between \u03b4\u03b8 (true gradient) and \u03b4\u03b8 (faulty gradient) with respect to CBOW parameters \u03b8 as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Update Worsens with More Negatives",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "cos( \u03b4\u03b8, \u03b4\u03b8) = C 2 + k + 1 (C 3 + k + 1)(C + k + 1)",
"eq_num": "(3)"
}
],
"section": "Update Worsens with More Negatives",
"sec_num": null
},
{
"text": "Where C is the number of context tokens and k is the number of negative samples. For a moderate context width, as the number of negative examples increases, the faulty update points further away from the true gradient (Figure 1) . See Appendix E for the full derivation. Figure 2 : Average negative sampling loss per token for every batch of 5 million tokens for a single epoch of CBOW training on Wikipedia. The shaded region corresponds to the 95% bootstrapped confidence interval over average token loss on 100K token batches.",
"cite_spans": [],
"ref_spans": [
{
"start": 218,
"end": 228,
"text": "(Figure 1)",
"ref_id": "FIGREF0"
},
{
"start": 271,
"end": 279,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Update Worsens with More Negatives",
"sec_num": null
},
{
"text": "In practice, we found performance of CBOW embeddings to only decay moderately as a function of number of negatives, regardless of correction to the update, consistent with that reported in Pennington et al. (2014) . Nevertheless, we find that our correct CBOW implementation decreases training loss more quickly than the incorrect update, even in the typical setting of 5 negative samples and context width of 5 ( Figure 2) . Ultimately, increased sensitivity to hyperparameters is not a concern under typical training regimes.",
"cite_spans": [
{
"start": 189,
"end": 213,
"text": "Pennington et al. (2014)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 414,
"end": 423,
"text": "Figure 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Update Worsens with More Negatives",
"sec_num": null
},
{
"text": "FastText Although we do not investigate fastText specifically in this work, we do note in Section 2 that fastText CBOW defaults to using the same incorrect source embedding gradient. Sent2vec (Gupta et al., 2019 ) is also exposed to this bug, since it builds on fastText. In spite of this, fastText embeddings trained with the CBOW objective were found to outperform SG across multiple languages (Grave et al., 2018) . To achieve better performance of CBOW, Grave et al. (2018) tuned hyperparameters and trained on a web-scale corpus (enabled by the faster CBOW training).",
"cite_spans": [
{
"start": 192,
"end": 211,
"text": "(Gupta et al., 2019",
"ref_id": "BIBREF6"
},
{
"start": 396,
"end": 416,
"text": "(Grave et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 458,
"end": 477,
"text": "Grave et al. (2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Update Worsens with More Negatives",
"sec_num": null
},
{
"text": "FastText also represents each word vector as the sum of vectors of its constituent character n-grams, and includes positional embeddings as part of the training procedure. In this work, we only consider the original CBOW and SG models described in Mikolov et al. (2013a) and hold hyperparameters and training set fixed between models whenever possible. We leave investigation of whether correcting this gradient bug could further improve fastText embeddings as future work.",
"cite_spans": [
{
"start": 248,
"end": 270,
"text": "Mikolov et al. (2013a)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Update Worsens with More Negatives",
"sec_num": null
},
{
"text": "Before the widespread adoption of automatic differentiation libraries, it was the modeler's responsibility to derive the correct gradient for updating model weights. This word2vec.c gradient bug could have been avoided if a finite difference check was run to ensure the derivation of the stochastic gradient was correct. These checks are standard practice (Bengio, 2012; Bottou, 2012) , and we strongly encourage other researchers to use finite difference gradient checks to verify the correctness of manually derived gradients. According to Bottou (2012) : \"When the computation of the gradients is slightly incorrect, stochastic gradient descent often works slowly and erratically... It is not uncommon to discover such bugs in SGD code that has been quietly used for years.\"",
"cite_spans": [
{
"start": 356,
"end": 370,
"text": "(Bengio, 2012;",
"ref_id": "BIBREF0"
},
{
"start": 371,
"end": 384,
"text": "Bottou, 2012)",
"ref_id": "BIBREF2"
},
{
"start": 542,
"end": 555,
"text": "Bottou (2012)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
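As an illustration of the recommended check, the sketch below compares the analytic gradient of the CBOW loss from Section 2 with central finite differences, for a single randomly drawn window. It is a hypothetical example (names and sizes are arbitrary, not from the paper), but it shows how the 1/C discrepancy in Eq. (2) would surface immediately: the corrected gradient matches the numerical estimate while the faulty one does not.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbow_loss(v_ctx, v_tgt, v_neg):
    """Negative-sampling CBOW loss for a single window (Section 2)."""
    v_c = v_ctx.mean(axis=0)
    return -np.log(sigmoid(v_tgt @ v_c)) - np.log(sigmoid(-(v_neg @ v_c))).sum()

def gradient_check(C=5, k=5, d=20, eps=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    v_ctx = rng.normal(size=(C, d))
    v_tgt = rng.normal(size=d)
    v_neg = rng.normal(size=(k, d))

    v_c = v_ctx.mean(axis=0)
    g_vc = (sigmoid(v_tgt @ v_c) - 1.0) * v_tgt + sigmoid(v_neg @ v_c) @ v_neg
    analytic = g_vc / C   # Eq. (2): gradient w.r.t. one context vector
    faulty = g_vc         # what word2vec.c / Gensim / fastText apply

    # Central finite differences w.r.t. the first context vector.
    numeric = np.zeros(d)
    for i in range(d):
        plus, minus = v_ctx.copy(), v_ctx.copy()
        plus[0, i] += eps
        minus[0, i] -= eps
        numeric[i] = (cbow_loss(plus, v_tgt, v_neg) -
                      cbow_loss(minus, v_tgt, v_neg)) / (2 * eps)

    print("corrected vs. numeric:", np.abs(analytic - numeric).max())  # tiny
    print("faulty    vs. numeric:", np.abs(faulty - numeric).max())    # not tiny

gradient_check()
```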
{
"text": "We find that CBOW can learn embeddings that are as performant as SG, when trained with the correct update, allowing efficient training of strong word embeddings on web-scale datasets on an academic budget. We release our implementation, k\u014dan, along with trained C4 CBOW embeddings at https://github.com/ bloomberg/koan. A Versions of Gensim and word2vec.c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We compare against the Gensim implementation 10 at commit a93067d2ea78916cb587552ba0fd22727c4b40ab and the word2vec.c implementation 11 at commit e092540633572b883e25b367938b0cca2cf3c0e7 which are the most recent commits at the time of writing. Figure 3 shows the time to train each set of embeddings as a function of number of worker threads on the tokenized Wikipedia corpus. All implementations were benchmarked on an Amazon EC2 c5.metal instance (96 logical processors, 196GB memory) using time, and embeddings were trained with a minimum token frequency of 10 and downsampling frequency threshold of 10 \u22123 . We benchmark our implementation with a buffer of 100,000 sentences. We trained CBOW for five epochs and SG for a single epoch, as it is much slower. Gensim seems to have trouble exploiting more worker threads when training CBOW, and both word2vec.c and our implementation learn embeddings faster. word2vec.c achieves slightly better scaling than our implementation during CBOW training (68m46s vs. 72m57s for 48 threads), although ours is faster with fewer workers (e.g., 559m0s vs. 284m58s for 4 threads and 88m28s vs. 72m29s for 32 threads). Although we did not comprehensively benchmark SG training for 5 epochs, we found that training SG embeddings for 5 epochs with 48 threads took us 3.31 times as long as training CBOW.",
"cite_spans": [],
"ref_spans": [
{
"start": 245,
"end": 253,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "10 https://github.com/RaRe-Technologies/ gensim/blob/develop/gensim/models/word2vec.py 11 https://github.com/tmikolov/word2vec/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Training time",
"sec_num": null
},
{
"text": "We evaluate a given set of embeddings on GLUE tasks by using the frozen pre-trained word embeddings in a neural classifier. Words that were out of vocabulary are represented with a zero embedding vector. We use a two layer BiLSTM with 1024 hidden layer width, and a final projection layer. For the NLI tasks, we encode each sequence with an independent BiLSTM encoder. If the embeddings of sequence 1 and 2 are v and w, respectively, then a prediction is made using a multilayer perceptron over [v; w; v \u2212 w; v w] with a single 512 unit hidden layer. This is identical to the NLI classifier described in Wang et al. (2018) . We performed a random search over dropout rate \u2208 [0.0, 0.7] and learning rate \u2208 [10 \u22126 , 10 \u22122 ] with a budget of ten runs. All models are trained for up to 100 epochs with a patience of 2 epochs for early stopping. The same set of sampled hyperparameters was used for model selection for each set of word embeddings.",
"cite_spans": [
{
"start": 495,
"end": 513,
"text": "[v; w; v \u2212 w; v w]",
"ref_id": null
},
{
"start": 604,
"end": 622,
"text": "Wang et al. (2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "C Extrinsic Evaluation Hyperparameters",
"sec_num": null
},
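For concreteness, a minimal PyTorch sketch of the pair classifier described above follows. It assumes the two BiLSTM sentence encodings v and w are already computed (with an assumed encoder output size of 2 x 1024 for a 1024-wide BiLSTM) and that the MLP uses a ReLU nonlinearity; only the 512-unit hidden layer and the feature construction [v; w; v - w; v * w] come from the text.

```python
import torch
import torch.nn as nn

class NLIPairClassifier(nn.Module):
    """MLP over the pair features [v; w; v - w; v * w] from Appendix C."""

    def __init__(self, enc_dim=2048, hidden=512, n_classes=3):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4 * enc_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, v, w):
        # v, w: (batch, enc_dim) BiLSTM encodings of sequence 1 and sequence 2
        feats = torch.cat([v, w, v - w, v * w], dim=-1)
        return self.mlp(feats)
```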
{
"text": "If we train embeddings using the correct implementation of CBOW, the l 2 norms of target and source vectors are unaffected by the width of the context window (Figure 4 ). With the faulty implementation Equation (1), the source vector norms increase as a function of the maximum context window width. Growing source norms can be a problem during learning, as larger embedding norms can easily lead to saturated sigmoid activations, effectively halting learning. This problem is further exacerbated by fast implementations of CBOW and SG that approximate the sigmoid activation function by a piecewise linear function. In these approximations, when the logit is above or below some threshold (e.g., 6 and -6 for word2vec.c) and the prediction agrees with ground truth, the gradient for this example is not back-propagated at all.",
"cite_spans": [],
"ref_spans": [
{
"start": 158,
"end": 167,
"text": "(Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "D CBOW Embedding Norm",
"sec_num": null
},
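The saturation effect described above can be illustrated with a sketch of a table-based sigmoid of the kind used by fast Word2vec implementations. Only the +/-6 clamp mentioned for word2vec.c comes from the text; the table resolution and the function names here are illustrative assumptions.

```python
import numpy as np

MAX_EXP = 6        # clamp threshold mentioned for word2vec.c
TABLE_SIZE = 1000  # illustrative resolution of the precomputed table

# Precomputed sigmoid values on [-MAX_EXP, MAX_EXP).
_logits = (np.arange(TABLE_SIZE) / TABLE_SIZE * 2.0 - 1.0) * MAX_EXP
_SIGMOID_TABLE = 1.0 / (1.0 + np.exp(-_logits))

def fast_sigmoid(logit):
    """Table lookup in place of exp(); saturates outside [-MAX_EXP, MAX_EXP]."""
    if logit >= MAX_EXP:
        return 1.0  # saturated; if the label agrees, the gradient (label - sigmoid) is zero
    if logit <= -MAX_EXP:
        return 0.0
    idx = int((logit + MAX_EXP) / (2 * MAX_EXP) * TABLE_SIZE)
    return _SIGMOID_TABLE[idx]

# With the faulty update, growing source norms push logits such as
# v_tgt @ v_c into the saturated region, so updates effectively stop.
```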
{
"text": "Relationship to correct CBOW update Although the incorrect CBOW update for source and target embeddings points in a different direction than the SGD update, this is still a descent direction for the CBOW loss. In other words, taking a sufficiently small step in this direction will reduce the stochastic loss. This can be easily seen by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E Analysis of the Incorrect CBOW Gradient",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D = CI dC 0 0 I d(k+1) \u03b4v =[ \u2202L \u2202v w 1 ; . . . ; \u2202L \u2202v w C ] \u03b4v =[ \u2202L \u2202v n 1 ; . . . ; \u2202L \u2202v n k ; \u2202L \u2202v w O ] \u03b4\u03b8 =[ \u2202L \u2202v ; \u2202L \u2202v ] \u03b4\u03b8 =D (\u03b4\u03b8)",
"eq_num": "(4)"
}
],
"section": "E Analysis of the Incorrect CBOW Gradient",
"sec_num": null
},
{
"text": "where d is the word embedding dimensionality, D is a diagonal matrix that scales the gradient with respect to each context word by C, and leaves the gradient with respect to the negative sampled target words and true target word unchanged. \u03b4\u03b8 and \u03b4\u03b8 denote the correct and incorrect gradients of the loss, respectively, with respect to source and target embeddings. Since all diagonal entries on D are strictly positive, D is positive definite. Therefore, if \u03b4\u03b8 > 0 then (\u2212 \u03b4\u03b8) T \u03b4\u03b8 < 0 and \u2212 \u03b4\u03b8 is a descent direction for the stochastic loss (Boyd and Vandenberghe, 2004) .",
"cite_spans": [
{
"start": 543,
"end": 572,
"text": "(Boyd and Vandenberghe, 2004)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "E Analysis of the Incorrect CBOW Gradient",
"sec_num": null
},
{
"text": "Even though the incorrect negative gradient is guaranteed to be a descent direction, the angle between this descent direction and the negative gradient can be influenced by the number of negative samples and the number of context words to average over. For the sake of simplicity, suppose that the gradient of the loss with respect to each source and target embedding has the same norm: \u2200j \u2208 {1, . . . , C}, i \u2208 {1, . . . , k} : \u2202L \u2202vw j 2 2 = \u2202L \u2202v n i 2 2 = \u03b1. Then the cosine similarity between the incorrect and correct gradient can be written as: cos( \u03b4\u03b8, \u03b4\u03b8) = ( \u03b4\u03b8) T \u03b4\u03b8 \u03b4\u03b8 2 \u03b4\u03b8 2 = (C 2 + k + 1)\u03b1 (C 3 + k + 1)\u03b1 (C + k + 1)\u03b1 = (C 2 + k + 1) (C 3 + k + 1)(C + k + 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E Analysis of the Incorrect CBOW Gradient",
"sec_num": null
},
{
"text": "We can better understand how the angle between the incorrect and correct gradient varies as a function of number of context words and negative samples by looking at a plot of this function (Figure 1 ). What this plot makes clear is that for a moderate number of context words (around 10, which corresponds to a maximum context window of 5), the incorrect stochastic gradient can differ significantly from the true gradient as the number of negative samples increases. However, for sensible settings of C, k \u2264 20, the minimum cosine similarity is 0.68, achieved by C = 9 and k = 20, with 0.82 cosine similarity with C = 5 and k = 5 (typical for Word2vec training). Because of this, the CBOW update bug may have gone unnoticed. If one were to sample a large number of negatives, then the bug may have been more apparent.",
"cite_spans": [],
"ref_spans": [
{
"start": 189,
"end": 198,
"text": "(Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "E Analysis of the Incorrect CBOW Gradient",
"sec_num": null
},
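The quoted values can be reproduced directly from Eq. (3); a quick numeric check (illustrative, not from the paper) is:

```python
import numpy as np

def cos_faulty_vs_true(C, k):
    """Cosine similarity between the faulty and true gradients, Eq. (3)."""
    return (C**2 + k + 1) / np.sqrt((C**3 + k + 1) * (C + k + 1))

print(round(cos_faulty_vs_true(5, 5), 2))   # 0.82, the typical Word2vec setting
print(round(cos_faulty_vs_true(9, 20), 2))  # 0.68, the minimum over C, k <= 20
```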
{
"text": "In this work, we always use the negative sampling formulations of Word2vec objectives which are consistently more efficient and effective than the hierarchical softmax formulations.2 SG requires sampling negative examples from every word in context, while CBOW requires sampling negative examples only for the target word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/bloomberg/koan 4 See Appendix B for training time benchmarks of popular Word2vec implementations against k\u014dan.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Practical recommendations for gradient-based training of deep architectures",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2012,
"venue": "Neural networks: Tricks of the trade",
"volume": "",
"issue": "",
"pages": "437--478",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio. 2012. Practical recommendations for gradient-based training of deep architectures. In Neural networks: Tricks of the trade, pages 437-478. Springer.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vec- tors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Stochastic gradient descent tricks",
"authors": [
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
}
],
"year": 2012,
"venue": "Neural networks: Tricks of the trade",
"volume": "",
"issue": "",
"pages": "421--436",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L\u00e9on Bottou. 2012. Stochastic gradient descent tricks. In Neural networks: Tricks of the trade, pages 421-436. Springer.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Convex optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Stephen",
"suffix": ""
},
{
"first": "Lieven",
"middle": [],
"last": "Boyd",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vandenberghe",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen P Boyd and Lieven Vandenberghe. 2004. Convex optimization.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Con- ference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning word vectors for 157 languages",
"authors": [
{
"first": "\u00c9douard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Prakhar",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u00c9douard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tom\u00e1\u0161 Mikolov. 2018. Learning word vectors for 157 languages. In Proceedings of the Eleventh International Con- ference on Language Resources and Evaluation (LREC 2018).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Better word embeddings by disentangling contextual n-gram information",
"authors": [
{
"first": "Prakhar",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Pagliardini",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Jaggi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "933--939",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1098"
]
},
"num": null,
"urls": [],
"raw_text": "Prakhar Gupta, Matteo Pagliardini, and Martin Jaggi. 2019. Better word embeddings by dis- entangling contextual n-gram information. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 933-939, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Neural word embedding as implicit matrix factorization",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "2177--2185",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In Advances in neural information processing sys- tems, pages 2177-2185.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Better word representations with recursive neural networks for morphology",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "104--113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Richard Socher, and Christo- pher D Manning. 2013. Better word representa- tions with recursive neural networks for morphol- ogy. In Proceedings of the Seventeenth Confer- ence on Computational Natural Language Learn- ing, pages 104-113.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Google Code Archive: word2vec",
"authors": [
{
"first": "",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikolov. 2013. Google Code Archive: word2vec. https://code.google.com/archive/p/ word2vec/.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed rep- resentations of words and phrases and their com- positionality. In Advances in neural information processing systems, pages 3111-3119.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natu- ral language processing (EMNLP), pages 1532- 1543.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter J",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Machine Learning Research",
"volume": "21",
"issue": "",
"pages": "1--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Ex- ploring the limits of transfer learning with a uni- fied text-to-text transformer. Journal of Ma- chine Learning Research, 21:1-67.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Software Framework for Topic Modelling with Large Corpora",
"authors": [
{
"first": "Radim",
"middle": [],
"last": "\u0158eh\u016f\u0159ek",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Sojka",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks",
"volume": "",
"issue": "",
"pages": "45--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radim \u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Cor- pora. In Proceedings of the LREC 2010 Work- shop on New Challenges for NLP Frameworks, pages 45-50, Valletta, Malta. ELRA. http: //is.muni.cz/publication/884893/en.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Introduction to the conll-2003 shared task: Languageindependent named entity recognition",
"authors": [
{
"first": "F",
"middle": [],
"last": "Erik",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "Sang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F Sang and Fien De Meulder. 2003. Introduc- tion to the conll-2003 shared task: Language- independent named entity recognition. arXiv preprint cs/0306050.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Model-based word embeddings from decompositions of count matrices",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Stratos",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Hsu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1282--1291",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Stratos, Michael Collins, and Daniel Hsu. 2015. Model-based word embeddings from de- compositions of count matrices. In Proceed- ings of the 53rd Annual Meeting of the Asso- ciation for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1282-1291.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Glue: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "353--355",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. Glue: A multi-task benchmark and analy- sis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neu- ral Networks for NLP, pages 353-355.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Cosine similarity of incorrect to correct stochastic gradient as a function of C and k assuming that the norm of the gradient with respect to each embedding is fixed to a shared constant. The minimum cosine similarity displayed in this plot is 0.303.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "Hours to train CBOW (left) and SG (right) embeddings as a function of number of worker threads for different Word2vec implementations.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "Average target and source vector norms as a function of context width for the correct and incorrect implementations of CBOW. All embeddings were trained for five epochs over a one million sentence sample of the Wikipedia corpus.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Package Objective</td><td/><td>ws353</td><td/><td>men</td><td>rw</td><td>syn</td><td>mixed</td><td>AVG</td></tr><tr><td>Gensim</td><td>SG CBOW</td><td/><td colspan=\"2\">72.2/61.9 61.5/62.6</td><td>72.1/72.5 69.2/70.0</td><td>46.8/44.9 43.2/38.1</td><td>72.8/72.4 69.9/71.1</td><td>82.5/81.3 79.1/78.0</td><td>69.3/66.6 64.6/64.0</td></tr><tr><td/><td>SG</td><td/><td colspan=\"2\">72.8/62.0</td><td>72.7/72.0</td><td>44.9/43.8</td><td>73.6/71.8</td><td>83.0/80.2</td><td>69.4/66.0</td></tr><tr><td/><td>CBOW [s]</td><td/><td colspan=\"2\">61.2/62.6</td><td>68.8/69.4</td><td>48.6/39.9</td><td>73.4/74.7</td><td>78.3/76.8</td><td>66.1/64.7</td></tr><tr><td>k\u014dan</td><td>CBOW [f]</td><td/><td colspan=\"2\">70.6/65.8</td><td>74.0/74.6</td><td>49.0/45.5</td><td>76.7/76.5</td><td>83.8/82.1</td><td>70.8/68.9</td></tr><tr><td/><td colspan=\"2\">CBOW [s]; C4</td><td colspan=\"2\">72.9/68.0</td><td>81.0/79.9</td><td>51.1/50.3</td><td>80.5/79.9</td><td>81.5/81.1</td><td>73.4/71.8</td></tr><tr><td/><td colspan=\"7\">CBOW [f]; C4 74.1/68.1 79.8/80.2 53.1/54.8 82.9/81.6</td><td>82.2/81.9</td><td>74.4/73.3</td></tr><tr><td/><td>Package</td><td colspan=\"2\">Objective</td><td/><td>SST-2</td><td>QNLI</td><td>MNLI-m</td><td>MNLI-mm</td></tr><tr><td/><td>Baselines</td><td colspan=\"2\">Random GloVe</td><td/><td>83.49/81.4 85.09/83.6</td><td>63.88/63.9 66.37/67.7</td><td>61.41/60.3 66.93/66.7</td><td>61.41/59.6 66.93/66.1</td></tr><tr><td/><td>Gensim</td><td colspan=\"2\">SG CBOW</td><td colspan=\"3\">86.93/85.6 88.07/86.7 69.06/70.7 69.39/67.9</td><td>68.02/67.3 68.15/68.2</td><td>68.02/67.9 68.15/66.9</td></tr><tr><td/><td>k\u014dan</td><td colspan=\"2\">SG CBOW</td><td/><td>86.24/84.5 88.42/85.3</td><td>69.06/68.2 68.17/68.6</td><td colspan=\"2\">68.20/67.4 69.22/68.4 69.22/68.6 68.20/67.7</td></tr></table>",
"text": "Table 1: Intrinsic evaluation of Wikipedia-trained Word2vec embeddings on dev/test folds. Spearman's rank correlation coefficient is reported for: wordsim353, men, and rw, and accuracy for: syn and mixed. AVG is the average across all five tasks. The best test performance for each task is bolded. CBOW [s] refers to the standard, incorrect implementation of CBOW, and CBOW [f ] is the fixed version.",
"html": null
},
"TABREF1": {
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td>Frozen</td><td>Finetuned</td></tr><tr><td colspan=\"4\">Package Objective Dev Test Dev Test</td></tr><tr><td/><td>Random</td><td colspan=\"2\">83.6 75.4 83.0 74.3</td></tr><tr><td>Gensim</td><td>SG</td><td colspan=\"2\">92.5 88.4 92.9 88.2</td></tr><tr><td/><td>CBOW</td><td colspan=\"2\">90.1 84.4 90.0 85.3</td></tr><tr><td>k\u014dan</td><td>SG CBOW</td><td colspan=\"2\">92.5 88.2 92.9 88.0 92.3 87.8 92.3 87.4</td></tr></table>",
"text": "",
"html": null
},
"TABREF2": {
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "Percent F1 on the CoNLL 2003 English NER heldout sets for each set of word embeddings.",
"html": null
}
}
}
}