{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:13:02.768460Z" }, "title": "Embedding Structured Dictionary Entries", "authors": [ { "first": "Steven", "middle": [ "R" ], "last": "Wilson", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Edinburgh", "location": { "settlement": "Edinburgh", "country": "UK" } }, "email": "steven.wilson@ed.ac.uk" }, { "first": "Walid", "middle": [], "last": "Magdy", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Edinburgh", "location": { "settlement": "Edinburgh", "country": "UK" } }, "email": "wmagdy@inf.ed.ac.uk" }, { "first": "Barbara", "middle": [], "last": "Mcgillivray", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Alan Turing Institute", "location": { "settlement": "London", "country": "UK" } }, "email": "bmcgillivray@turing.ac.uk" }, { "first": "Gareth", "middle": [], "last": "Tyson", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Alan Turing Institute", "location": { "settlement": "London", "country": "UK" } }, "email": "g.tyson@qmul.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Previous work has shown how to effectively use external resources such as dictionaries to improve English-language word embeddings, either by manipulating the training process or by applying post-hoc adjustments to the embedding space. We experiment with a multitask learning approach for explicitly incorporating the structured elements of dictionary entries, such as user-assigned tags and usage examples, when learning embeddings for dictionary headwords. Our work generalizes several existing models for learning word embeddings from dictionaries. However, we find that the most effective representations overall are learned by simply training with a skip-gram objective over the concatenated text of all entries in the dictionary, giving no particular focus to the structure of the entries.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Previous work has shown how to effectively use external resources such as dictionaries to improve English-language word embeddings, either by manipulating the training process or by applying post-hoc adjustments to the embedding space. We experiment with a multitask learning approach for explicitly incorporating the structured elements of dictionary entries, such as user-assigned tags and usage examples, when learning embeddings for dictionary headwords. Our work generalizes several existing models for learning word embeddings from dictionaries. However, we find that the most effective representations overall are learned by simply training with a skip-gram objective over the concatenated text of all entries in the dictionary, giving no particular focus to the structure of the entries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "While word embedding models are typically trained using large text corpora with objectives based on distributional semantics, recent work has shown how to take advantage of external resources like WordNet (Miller, 1995) and other manually created dictionaries in order to better capture wordlevel semantic relationships of interest. 
For example, previous work has used the graph structure of external resources to post-process pre-trained word embeddings, enforcing that the similarity between embeddings reflects the similarity inferred from the graph structure of lexicons like WordNet (Faruqui et al., 2015). Following a similar principle, others use known synonymy and antonymy relationships between words to adjust the distance between word embeddings (Mrk\u0161i\u0107 et al., 2016). Other work uses traditional dictionaries to improve the overall coverage of word embedding models by creating embeddings for rare words by leveraging information from their definitions (Bahdanau et al., 2017).", "cite_spans": [ { "start": 205, "end": 219, "text": "(Miller, 1995)", "ref_id": "BIBREF14" }, { "start": 588, "end": 610, "text": "(Faruqui et al., 2015)", "ref_id": "BIBREF5" }, { "start": 761, "end": 782, "text": "(Mrk\u0161i\u0107 et al., 2016)", "ref_id": "BIBREF15" }, { "start": 970, "end": 993, "text": "(Bahdanau et al., 2017)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While dictionaries have been shown to be useful, most previous work has focused only on using the text of the definitions in order to learn word representations. However, many dictionaries include additional structural elements such as usage examples, quotations containing the headword, tags, labels, and more. For some online crowd-built dictionaries, information such as the contributing users and even upvotes and downvotes is available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We conjecture that such meta-information may prove useful and, therefore, we seek to leverage all of this additional information to build improved representations of the words defined in a given dictionary. To do this, we generalize the Consistency-Penalized Autoencoder (CPAE) (Bosc and Vincent, 2018) to allow not only for the reconstruction of dictionary definitions, but also for making predictions about the other structural elements available, such as usage examples and user-assigned tags.", "cite_spans": [ { "start": 278, "end": 302, "text": "(Bosc and Vincent, 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We make the following contributions in this paper: (1) we propose a flexible, multi-task learning extension to the CPAE model that can be used to produce embeddings from structured dictionary entries, (2) we evaluate the applicability of this extended model to three English-language dictionary datasets, each with its own unique characteristics and set of structural elements, and (3) we demonstrate that a simple baseline approach for learning word embeddings, based on the popular skip-gram with negative sampling framework, can often lead to representations that better capture word-level semantic similarity according to a range of commonly used evaluation tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We consider three manually constructed, machine-readable, English-language dictionaries: English WordNet 1 (Miller, 1995), English Wiktionary 2, and Urban Dictionary (UD) 3, each containing definitions for each word in addition to one or more structural elements such as usage examples, tags, or votes (Table 1). 
We find that many of the terms that are defined in Urban Dictionary are not commonly used in everyday language, and so we choose to further filter the set of headwords from Urban Dictionary to those that have been used at least 10,000 times in a sample of tweets collected over a five-year period, as identified in (Wilson et al., 2020b).", "cite_spans": [ { "start": 106, "end": 120, "text": "(Miller, 1995)", "ref_id": "BIBREF14" }, { "start": 628, "end": 650, "text": "(Wilson et al., 2020b)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 304, "end": 313, "text": "(Table 1)", "ref_id": null } ], "eq_spans": [], "section": "Structured Dictionary Data", "sec_num": "2.1" }, { "text": "To provide a simple baseline for later evaluation, we train word embeddings using the entire text of each dictionary, including all structured elements, by treating each structural element as a short document and prepending the entry headword to each. We use a standard skip-gram model with negative sampling (SGNS), trained using the FastText library (Mikolov et al., 2018).", "cite_spans": [ { "start": 352, "end": 374, "text": "(Mikolov et al., 2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Approach", "sec_num": "2.2" }, { "text": "Next, we present an approach for learning word embeddings that implicitly encode a wide range of the elements that are present in a dictionary entry. Given a word defined in a dictionary, the objective of the model is to accurately recover as much structural information as possible, including the word's definition, usage examples, tags, and authors. We also leverage user-provided votes as a means of sorting and filtering the dictionary entries. The model takes a word's definition as input, and learns a transformation from the words in the definition to an embedding that contains features that describe the structural elements of the dictionary entry for the word. We treat the prediction of each type of structural element as a separate task within a multi-task learning framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Auto-encoding Structured Entries with Multi-task Learning", "sec_num": "3" }, { "text": "Our model ( Figure 1 ; a more formal, detailed description of the model is given in Appendix A) can be seen as a generalization of several others: a simple auto-encoder, Hill's model, and the consistency-penalized auto-encoder (CPAE). Figure 1: Model architecture of the multi-task learning autoencoder for embedding words from their structured dictionary entries. Input tokens are embedded using the Input Embeddings layer, and the n tokens in the definition of headword $w_h$ are passed to the Definition Encoder to produce the definition embedding h. This embedding should be consistent (low distance) with the embedding of the definition headword $e_h$. M possible output tasks can be used, each with its own decoder which needs to reconstruct the Target.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 1", "ref_id": null }, { "start": 228, "end": 236, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.1" }, { "text": "In each case, the input for the model is a definition 4 for the target headword, $w_h$. The input tokens are converted into a sequence of embeddings using a learnable word embedding layer, and these embeddings are passed to the definition encoder, which produces a single embedding, h, which is used as the representation for $w_h$. 
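As a concrete illustration of this encoding step, the sketch below shows one way such a definition encoder could be implemented. This is not the authors' released code: the use of PyTorch and all class and variable names are our own assumptions, although Appendix C does state that a 300-dimensional bidirectional GRU followed by a feedforward layer was used.

```python
# Minimal sketch of a definition encoder: token ids -> embeddings -> BiGRU -> h.
# Framework (PyTorch) and all names are illustrative assumptions.
import torch
import torch.nn as nn

class DefinitionEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=300):
        super().__init__()
        # Learnable input embedding layer for the definition tokens
        self.input_embeddings = nn.Embedding(vocab_size, emb_dim)
        # Bidirectional GRU over the definition, as described in Appendix C
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        # Feedforward layer mapping the GRU states to a single embedding h
        self.proj = nn.Linear(2 * hidden_dim, emb_dim)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) indices of the (possibly concatenated) definition
        embedded = self.input_embeddings(token_ids)   # (batch, seq_len, emb_dim)
        _, final_states = self.gru(embedded)          # final_states: (2, batch, hidden_dim)
        # Concatenate the final states of both directions and project to h
        h = self.proj(torch.cat([final_states[0], final_states[1]], dim=-1))
        return h                                      # (batch, emb_dim): the headword representation
```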
This embedding is then fed to any number of decoders, each with its own specific objective and loss function (details in the subsections of Appendix A). The goal of each decoder's loss is to influence the weights of the encoder to produce an embedding h that is most useful for capturing a specific structural element of the dictionary entry for $w_h$, or to retain some other important property of the embedding h. The decoders that we use and their associated losses become components in the overall loss function for our model, given below. Table 1: Structural elements present in three machine-readable dictionaries, and the number of headwords, definitions, and total tokens present in each. UD (Filtered) is the filtered version of Urban Dictionary, which excludes headwords that are not commonly used, as well as definitions for which the number of upvotes minus the number of downvotes is negative. This is the version of Urban Dictionary that is used when training our proposed model.", "cite_spans": [], "ref_spans": [ { "start": 871, "end": 878, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.1" }, { "text": "$L = \lambda_0 L_0 + \lambda_1 L_1 + \ldots + \lambda_n L_n$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.1" }, { "text": "The target of each decoder depends on the structural element that it is meant to encode. For the definitions, the goal of the decoder is to reproduce the definition itself (making the use of this task alone equivalent to a simple autoencoder). For the usage examples and tags, the target task is to predict the context in which the headword appears, using a skip-gram learning objective. We also experiment with using the user-provided votes to filter and sort the data, as well as to provide weights for the input definitions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.1" }, { "text": "An additional loss term can be used in order to enforce consistency between the learned embedding h and the input embedding for the headword, $e_h$. This is similar to the main objective of Hill's model and is the consistency penalty that is used in the CPAE model (Bosc and Vincent, 2018). This forces the model to produce embeddings for headwords that are consistent with the embeddings produced for the same words when they appear in the definitions of other headwords.", "cite_spans": [ { "start": 267, "end": 291, "text": "(Bosc and Vincent, 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.1" }, { "text": "We evaluate all produced embeddings 5 across a range of intrinsic evaluation tasks as used in (Jastrzebski et al., 2017). 6 For these word-level semantic similarity tasks, the machine-generated scores (cosine similarities between the produced word embeddings) are compared against human-labeled similarity scores by computing the correlation between the two sets of scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Results", "sec_num": "4" }, { "text": "The tasks involved include the Marco, Elia and Nam (MEN) annotated word pairs based on image captioning data (Bruni et al., 2014) and the SimVerb (SV) verb similarity dataset (Gerz et al., 2016), both of which have standardized development and testing splits. We use the development splits of these datasets in order to tune our models. 
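Concretely, each evaluation dataset provides word pairs with human similarity ratings, and the embeddings are scored as sketched below. This is a minimal illustration using SciPy; the function and variable names are our own assumptions rather than the evaluation package's actual API.

```python
# Minimal sketch of the intrinsic evaluation: cosine similarity per word pair,
# then Spearman correlation against the human ratings.
import numpy as np
from scipy.stats import spearmanr

def evaluate_similarity(embeddings, word_pairs, gold_scores):
    """embeddings: dict mapping word -> np.ndarray vector;
    word_pairs: list of (w1, w2); gold_scores: human similarity ratings."""
    model_scores = []
    for w1, w2 in word_pairs:
        v1, v2 = embeddings[w1], embeddings[w2]
        # Cosine similarity between the two word embeddings
        model_scores.append(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
    # Spearman's rho between model similarities and human judgments
    rho, _ = spearmanr(model_scores, gold_scores)
    return rho
```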
The WordSim-353 (WS) dataset contains both similarity (WS-S) and relatedness (WS-R) annotations for the same sets of words, allowing us to examine the ability of our models to capture each of these semantic relations. We also evaluate using the SimLex-999 dataset and a subset of that data, SimLex-333 (SL999 and SL333) (Hill et al., 2015). The SL333 subset contains only the 333 most related pairs according to the human annotations. Stanford's Contextual Word Similarities (SCWS) dataset (Huang et al., 2012), the 65 word pairs studied by Rubenstein and Goodenough (1965) (RG65), the Mechanical Turk (MT) dataset (Radinsky et al., 2011), and the Rare Words (RW) dataset (Luong et al., 2014) round out the rest of our evaluation tasks.", "cite_spans": [ { "start": 109, "end": 129, "text": "(Bruni et al., 2014)", "ref_id": "BIBREF3" }, { "start": 173, "end": 192, "text": "(Gerz et al., 2016)", "ref_id": "BIBREF6" }, { "start": 656, "end": 675, "text": "(Hill et al., 2015)", "ref_id": "BIBREF8" }, { "start": 827, "end": 847, "text": "(Huang et al., 2012)", "ref_id": "BIBREF9" }, { "start": 879, "end": 911, "text": "Rubenstein and Goodenough (RG65)", "ref_id": null }, { "start": 951, "end": 974, "text": "(Radinsky et al., 2011)", "ref_id": "BIBREF17" }, { "start": 1009, "end": 1029, "text": "(Luong et al., 2014)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Results", "sec_num": "4" }, { "text": "For models that use our proposed architecture, we initialize the input embeddings using the baseline pre-trained skip-gram embeddings. We train these embeddings ourselves in the case of WordNet and Wiktionary, and use the ud-basic embeddings released by (Wilson et al., 2020a) for Urban Dictionary. 7 Table 2 shows the similarity and relatedness scores achieved when using various combinations of objectives in our model. 8 We observe that for WordNet, the simple SGNS embeddings are always outperformed by the other approaches, which is in line with the results reported in (Bosc and Vincent, 2018), where the CPAE-P model was found to achieve the best results when using WordNet. We can see that adding structure, which, for the case of WordNet, only includes usage examples, leads to an improvement over the base CPAE-P model in many cases. The overall trend is similar for the Wiktionary data, yet we see a stronger performance from the SGNS baseline. (From the caption of Table 2: Hill's model is the structured dictionary encoder with only the consistency penalty, CPAE-P is the Consistency Penalized Autoencoder (Bosc and Vincent, 2018) with pre-trained word embedding targets, and the version with Structure is our proposed extension to the model, making use of additional training objectives based on any available structural elements. SGNS is the skip-gram with negative sampling baseline word embedding model. 
Bold indicates the best result for a given dictionary; underlined numbers are also the overall best.)", "cite_spans": [ { "start": 423, "end": 424, "text": "8", "ref_id": null }, { "start": 576, "end": 600, "text": "(Bosc and Vincent, 2018)", "ref_id": "BIBREF1" }, { "start": 1120, "end": 1144, "text": "(Bosc and Vincent, 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 302, "end": 309, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Evaluation and Results", "sec_num": "4" }, { "text": "In fact, SGNS achieves the best results for two of the test datasets and achieves competitive results across the board, making it a viable alternative to the more complex dictionary auto-encoding approaches. Finally, for the Urban Dictionary data, we see the baseline SGNS approach overtaking the other methods in almost every evaluation set, also leading to many of the best overall scores found in this study. This shift in performance may be related to the overall size of each dataset: the Urban Dictionary dataset contains approximately 200 million total tokens, compared to the 1.7 million in WordNet and 4.6 million in English Wiktionary. Further, as Urban Dictionary's definitions contain a mixture of noisy submissions, jokes, and opinions, they are likely to be less closely tied to the true meanings of the headwords (Nguyen et al., 2018). This could make the auto-encoding objective less useful overall in comparison to learning representations of the words simply based on their usage contexts.", "cite_spans": [ { "start": 781, "end": 802, "text": "(Nguyen et al., 2018)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Results", "sec_num": "4" }, { "text": "We show that the extension of the CPAE model to include additional structural elements can provide some gains in word-level semantic similarity tasks; however, the extra complexity of this approach is unnecessary for learning useful word embeddings and, in many cases, leads to degradation in the scores across a range of standard word embedding evaluation metrics in comparison to simpler approaches. To build general-purpose word embeddings from a sufficiently large dictionary (i.e., containing at least several hundred million tokens of text), our recommendation is to simply concatenate all of the structural elements together as a single text, inserting the entry headword between each element, and applying the widely popular skip-gram architecture to this text to learn traditional distributional embeddings. This approach requires only a single learning objective, trains in much less time, and achieves competitive results in many cases, making it an easier alternative to explicitly leveraging structural information from dictionary entries while still creating useful embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "Future work should explore how these approaches would work when applied to more English dictionaries such as the Oxford English Dictionary 9 in order to better understand the effects of using a more standardized dictionary to learn embeddings. 
Further, dictionaries in other languages, particularly lower-resource languages, should be considered, since our results suggest that the approaches described in this paper outperform the baseline approach mostly in settings where the total amount of text in the dictionary is small.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "Formally, let $D = \{w_{in}^{0}, w_{in}^{1}, \ldots, w_{in}^{n}\}$ be the sequence of tokens in the definition for $w_h$. The elements of $D$ belong to the vocabulary of all words that appear in definitions, $V_{in}$, and $w_h$ belongs to the vocabulary of all headwords, $V_h$. In the case of polysemous words, which have more than one meaning, we concatenate the tokens from all definitions together into a single sequence, and separate them by a special SEP token.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix A Detailed Model Description", "sec_num": null }, { "text": "Given the full sequence of input words, $E_{in}(D) = \{e_{in}^{0}, e_{in}^{1}, \ldots, e_{in}^{n}\}$ is the set of $d_{in}$-dimensional embeddings representing the words in the definition. These embeddings can be learned during training, or pre-initialized and frozen, as discussed later in this section. The embeddings are passed into an encoder layer in order to produce a single $d_h$-dimensional embedding $h = enc(E_{in}(D))$. The encoder can be any type of model that takes a variable-length sequence of embeddings as input and produces a single, fixed-length embedding as output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix A Detailed Model Description", "sec_num": null }, { "text": "This embedding is then fed to any number of decoders, each with its own specific objective and loss function. The goal of each decoder's loss is to influence the weights of the encoder to produce an embedding h that is most useful for capturing a specific structural element of the dictionary entry for $w_h$, or to retain some other important property of the embedding h. In the following subsections, we describe the decoders that we use and their associated losses, which become components in the overall loss function for our model:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix A Detailed Model Description", "sec_num": null }, { "text": "$L = \lambda_0 L_0 + \lambda_1 L_1 + \ldots + \lambda_n L_n$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix A Detailed Model Description", "sec_num": null }, { "text": "for up to n objectives, each with its own associated weight term. These weights can be used to control the overall influence of each objective in the final loss computation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix A Detailed Model Description", "sec_num": null }, { "text": "The words in a well-formed definition should provide a precise encapsulation of one of the meanings of the headword being defined. So, we expect that a combination of the meanings of the words in the definition should provide a reasonable approximation of the meaning of the word itself.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Definitions as reconstruction targets", "sec_num": null }, { "text": "
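Before the individual decoders are described, the sketch below illustrates how the overall objective defined above, $L = \lambda_0 L_0 + \ldots + \lambda_n L_n$, can be assembled from per-task components. This is only an illustration under our own assumptions (task names and the function signature are invented for clarity), not the authors' implementation.

```python
# Minimal sketch of combining per-decoder losses with their lambda weights.
# The decoders and loss functions are placeholders for those defined in A.1-A.3.
def combined_loss(h, targets, decoders, loss_fns, lambdas):
    """h: encoder output for one headword (or a batch);
    decoders, loss_fns, lambdas: dicts keyed by task name,
    e.g. 'definition', 'usage', 'tags', 'consistency' (illustrative keys)."""
    total = 0.0
    for task, weight in lambdas.items():
        if weight == 0.0:
            # Setting a task's lambda to 0 disables that objective entirely,
            # e.g. keeping only 'definition' recovers a plain auto-encoder.
            continue
        prediction = decoders[task](h)
        total = total + weight * loss_fns[task](prediction, targets[task])
    return total
```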
Since the input to our encoder is the set of embeddings of the definition words, a decoder objective that reconstructs the definition from the intermediate representation, h, leads to a simple auto-encoder for the definition itself.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Definitions as reconstruction targets", "sec_num": null }, { "text": "The definition decoder with learned parameters $\theta$ produces a set of predictions of the words belonging to the original definition, $\hat{D} = dec_{\theta}(h)$, and this decoder is used to compute the definition reconstruction loss $L_R$. We use a simple conditional unigram language modeling loss as our reconstruction loss", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Definitions as reconstruction targets", "sec_num": null }, { "text": "$L_R = -\log p(D|\theta) = \sum_{w \in D} -\log p(w|\theta)$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Definitions as reconstruction targets", "sec_num": null }, { "text": "where $p(w|\theta)$ is determined by the decoder $dec_{\theta}$. For the decoder, the auto-encoder model uses a single linear layer with input size $d_h$ and output size $|V_{def}|$, followed by a softmax operation, providing a probability $p(w)$ for all words in the output vocabulary $V_{def}$. The output vocabulary $V_{def}$ is equal to $V_{in}$ in the traditional auto-encoder setting, since the objective is to reproduce the set of input words. However, in practice, we can speed up computation with minimal impact on performance by reducing $V_{def}$ to contain only the $m_{def}$ most common words, and treating all others as out-of-vocabulary. The out-of-vocabulary words are represented by a single token, UNK, which is ignored for the purposes of the loss computation. Including only this objective (which can be achieved by setting $\lambda_t = 0$ for every other task t) is equivalent to a simple definition auto-encoder: given the words in the headword's definition, produce an intermediate embedding h which can then be used to reconstruct the original set of words from the definition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Definitions as reconstruction targets", "sec_num": null }, { "text": "While widely used distributional word embeddings rely on examples of words in context in order to learn representations of those words, hundreds of usage examples of each word are usually required in order to build stable representations (Burdick et al., 2018). We experiment with using only the few prototypical examples that are provided in the dictionary entries themselves as training samples for the term. This has several advantages: first, no data outside of the dictionary itself is needed to train the embeddings, and second, usage examples should, by nature, be written in a way that emphasizes a specific meaning of the term, providing a potentially stronger semantic signal than randomly sampled occurrences of a term in a text corpus. Usage contexts may help to capture aspects of meaning that correspond to general semantic relatedness between words. 
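To make the reconstruction objective from A.1 concrete, the following sketch shows a linear-plus-softmax decoder over h and the corresponding unigram loss. As before, the use of PyTorch and all names are our own illustrative assumptions.

```python
# Minimal sketch of the definition-reconstruction decoder: a single linear layer
# from d_h to |V_def| followed by a (log-)softmax, with L_R summing -log p(w)
# over the words of the original definition.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DefinitionDecoder(nn.Module):
    def __init__(self, d_h, vocab_size):
        super().__init__()
        self.output = nn.Linear(d_h, vocab_size)   # d_h -> |V_def|

    def forward(self, h):
        # log p(w | theta) for every word in the output vocabulary
        return F.log_softmax(self.output(h), dim=-1)

def reconstruction_loss(log_probs, definition_ids):
    # L_R = sum over words w in the definition of -log p(w | theta).
    # Positions mapped to the UNK token would be dropped before this call.
    return -log_probs.gather(-1, definition_ids).sum()
```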
Similarly, tags provide high-level category information related to words, and we expect that words with similar sets of tags will be related in meaning.", "cite_spans": [ { "start": 239, "end": 261, "text": "(Burdick et al., 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "A.2 Usage examples and tags as context", "sec_num": null }, { "text": "To incorporate this information into our model, we use a skip-gram language modeling objective similar to the one used by the word2vec model for learning word embeddings from word-in-context samples. That is, given the embedding for a word, h, we train a new feedforward output layer to predict the set of words that appear in the usage example context around the target word or, in the case of tags, to predict the full set of tags. In the case of the usage examples, we replace the word and its morphological variations with a special MASK token so that the model does not learn to simply predict the word itself. Then, we define new vocabularies $V_{use}$ and $V_{tag}$ for all words that appear in usage examples in the dictionary and for all tags, respectively, and we train linear layers to predict the set of usage words and tags given h. The loss $L_{use}$ is then the cross-entropy between the predicted distribution over $V_{use}$ and the equally sized vector of counts representing the number of times each word actually appeared in a usage example, and the same is done with the tag distribution to compute the tag prediction loss, $L_{tag}$. As with the definition decoder, we allow the size of the output vocabularies to be restricted to the most common $m_{use}$/$m_{tag}$ words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.2 Usage examples and tags as context", "sec_num": null }, { "text": "The consistency-penalized auto-encoder model (CPAE) adapted an additional constraint, based on Hill's model, to minimize the distance between the input embedding $e_h = E_{in}(w_h)$ and the learned encoder embedding h. To achieve this, the Euclidean distance between the two embeddings is minimized as an additional component of the loss, the consistency penalty $L_C = \|h - e_h\|^2$, which can only be computed for the set of words that are both defined (headwords) and used within definitions of other words, i.e., $V_h \cap V_{in}$. When setting $\lambda_t = 0$ for all other tasks t, we can approximately recover Hill's model. It was previously shown (Bosc and Vincent, 2018) that initializing the weights of the input embeddings $E_{in}$ with pre-trained word embeddings, paired with this type of consistency constraint, can lead to improved performance on a number of word relatedness tasks (we label this setting as CPAE-P).", "cite_spans": [ { "start": 644, "end": 668, "text": "(Bosc and Vincent, 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "A.3 Consistency between embeddings", "sec_num": null }, { "text": "User-provided information can be used in several ways in our method. In our current setup, a given headword may often have too many entries for our models, which rely on a recurrent encoder over the concatenation of all tokens in all definitions, to adequately focus on all of them at once. In Urban Dictionary, we can rely on the signal of user-provided votes, which are applied at the entry level. 
This information can help sort the set of entries by importance: when training on our concatenated lists of definitions, entries, and tags, we try sorting them 10 by their net number of votes (up-votes \u2212 down-votes) so that the top-scoring entries will be processed by the model first, giving them priority over the other entries. We also remove any entries that received negative net votes from the concatenated list of entries. Empirically, we found that using the voting information in this way resulted in either a minor improvement or no change in the results, and so all results presented reflect the use of votes as signals of importance where votes are available. Table 3 shows the full set of results across all three dictionaries using the same evaluation tasks as before. AE/Autoencoder is the simple autoencoder model in which the loss term only consists of the definition reconstruction penalty. CPAE is the Consistency Penalized Autoencoder (Bosc and Vincent, 2018), which is the same as the AE model with the addition of the consistency penalty. Model names ending with \"-P\" use pre-trained embeddings (the same used for the SGNS baseline) to initialize the input embedding layer of the model. Hill's model only uses the consistency penalty and always uses pre-trained embeddings to initialize the input embedding layer. SGNS is the skip-gram with negative sampling baseline, and \"+Structure\" is the same as the previous row, but using our multi-task learning framework to train the model to use the structural elements available in the dictionary. For the Urban Dictionary data, for models that use pre-trained embeddings to initialize the input layer, \"Full\" indicates that those embeddings ", "cite_spans": [ { "start": 1370, "end": 1394, "text": "(Bosc and Vincent, 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 1087, "end": 1094, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "A.4 Votes as signals of importance", "sec_num": null }, { "text": "have been trained on the entirety of Urban Dictionary, while \"Part\" means that the embeddings were only trained on the filtered subset (only commonly used words, no words receiving negative total votes) of Urban Dictionary that was used to train the main dictionary embedding model. The models presented in the Results section of the main paper are those that achieved the best result for at least one evaluation task for any dictionary dataset. However, from this full set of results, we can observe that the addition of structural elements through multi-task learning does lead to improvements in some cases, especially for Urban Dictionary. This may be due to the fact that the definitions in Urban Dictionary are not always strictly providing direct meanings of the words and sometimes include jokes and opinions (Nguyen et al., 2018), so the usage examples and tags can be used to help provide more useful signals when training the model. 
This phenomenon can be seen even more clearly in the very poor results achieved by the plain autoencoder and CPAE models on Urban Dictionary, which achieve no better than random results on some of the evaluation tasks, indicating that the signal from the definitions in Urban Dictionary is extremely noisy, even within the filtered subset of the dictionary.", "cite_spans": [ { "start": 821, "end": 842, "text": "(Nguyen et al., 2018)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "125", "sec_num": null }, { "text": "We train our models for a maximum of 150 epochs, implementing early stopping using two of the intrinsic evaluation tasks which have readily available development sets: MEN (Bruni et al., 2012) and SimVerb-999 (Hill et al., 2015). When the model's average performance on these two tasks does not increase for 10 epochs in a row, we stop training and save the embeddings produced by the model which achieved the maximum average score on these development tasks. We initialize our input embeddings with the baseline FastText embeddings trained on the concatenation of all structural dictionary elements treated as plain text.", "cite_spans": [ { "start": 172, "end": 192, "text": "(Bruni et al., 2012)", "ref_id": "BIBREF2" }, { "start": 209, "end": 228, "text": "(Hill et al., 2015)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "C Experimental details", "sec_num": null }, { "text": "For our definition encoder, we use a 300-dimensional, bidirectional GRU layer 11 followed by a single feedforward layer. We set the dimension $d_h$ to match the size of whichever pre-trained embeddings we use with that model (usually 300) so that the consistency penalty can be properly computed. 11 We also experimented with several other simple encoder types, including the LSTM that was used in (Bosc and Vincent, 2018), but found the bi-GRU to give consistently better or equal results with a smaller number of parameters. We limit the size of each output vocabulary to the most common 10,000 words, and we limit the size of the input vocabulary to the most common 50,000 words.", "cite_spans": [ { "start": 284, "end": 286, "text": "11", "ref_id": null }, { "start": 385, "end": 409, "text": "(Bosc and Vincent, 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "C Experimental details", "sec_num": null }, { "text": "We use Adam (Kingma and Ba, 2014) as the optimizer with a learning rate of $3 \times 10^{-4}$. Following (Bosc and Vincent, 2018), we set the $\lambda$ value for the reconstruction task to 1 and modify the other weights proportionally. Given the previously reported importance of the consistency penalty, we set this to 64. 
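For reference, the hyperparameters stated above can be summarized as the following illustrative configuration; this is a sketch of our own, and the structure of the original training code is not known to us.

```python
# Compact summary of the training configuration described in Appendix C.
config = {
    "optimizer": "Adam",
    "learning_rate": 3e-4,
    "max_epochs": 150,
    "early_stopping_patience": 10,   # epochs without improvement on the MEN/SimVerb dev sets
    "encoder": "300-dimensional bidirectional GRU + feedforward layer",
    "embedding_dim": 300,            # d_h matches the pre-trained embedding size (usually 300)
    "input_vocab_size": 50000,       # most common input words
    "output_vocab_size": 10000,      # most common words per output vocabulary
    "lambda_reconstruction": 1.0,
    "lambda_consistency": 64.0,
}
```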
In order to focus our search on the possible combinations of objectives, we also leave the \u03bb values for the tags and examples at 1.", "cite_spans": [ { "start": 97, "end": 121, "text": "(Bosc and Vincent, 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "C Experimental details", "sec_num": null }, { "text": "Or, in the case of polysemous words, the concatenation of all tokens in all definitions, separated by a SEP token", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Details of the experimental setup are in Appendix C.6 We used code from the web package, located at: https://github.com/kudkudak/ word-embeddings-benchmarks to run the intrinsic evaluation tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "These embeddings were trained on the entirety of Urban Dictionary rather than just the subset that we use in this study.8 Only best performing models are shown; the full set of results can be found in Appendix B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.oed.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We also explore using votes to weight the loss for each example, but find no significant differences in the results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by The Alan Turing Institute under the EPSRC grants EP/N510129/1, and EP/S033564/1. We also acknowledge support via EP/T001569/1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Learning to compute word embeddings on the fly", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Bosc", "suffix": "" }, { "first": "Stanis\u0142aw", "middle": [], "last": "Jastrzebski", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Grefenstette", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1706.00286" ] }, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Tom Bosc, Stanis\u0142aw Jastrzebski, Edward Grefenstette, Pascal Vincent, and Yoshua Bengio. 2017. Learning to compute word embed- dings on the fly. arXiv preprint arXiv:1706.00286.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Auto-encoding dictionary definitions into consistent word embeddings", "authors": [ { "first": "Tom", "middle": [], "last": "Bosc", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1522--1532", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Bosc and Pascal Vincent. 2018. Auto-encoding dictionary definitions into consistent word embed- dings. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1522-1532.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Distributional semantics in technicolor", "authors": [ { "first": "Elia", "middle": [], "last": "Bruni", "suffix": "" }, { "first": "Gemma", "middle": [], "last": "Boleda", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Nam-Khanh", "middle": [], "last": "Tran", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "136--145", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elia Bruni, Gemma Boleda, Marco Baroni, and Nam- Khanh Tran. 2012. Distributional semantics in tech- nicolor. In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 136-145.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Multimodal distributional semantics", "authors": [ { "first": "Elia", "middle": [], "last": "Bruni", "suffix": "" }, { "first": "Nam-Khanh", "middle": [], "last": "Tran", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2014, "venue": "Journal of Artificial Intelligence Research", "volume": "49", "issue": "", "pages": "1--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Ar- tificial Intelligence Research, 49:1-47.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Factors influencing the surprising instability of word embeddings", "authors": [ { "first": "Laura", "middle": [], "last": "Burdick", "suffix": "" }, { "first": "Jonathan", "middle": [ "K" ], "last": "Kummerfeld", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2092--2102", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laura Burdick, Jonathan K Kummerfeld, and Rada Mi- halcea. 2018. Factors influencing the surprising in- stability of word embeddings. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Pa- pers), pages 2092-2102.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Retrofitting word vectors to semantic lexicons", "authors": [ { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Jesse", "middle": [], "last": "Dodge", "suffix": "" }, { "first": "Sujay", "middle": [], "last": "Kumar Jauhar", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Noah A", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1606--1615", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A Smith. 2015. Retrofitting word vectors to semantic lexicons. 
In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 1606-1615.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Simverb-3500: A largescale evaluation set of verb similarity", "authors": [ { "first": "Daniela", "middle": [], "last": "Gerz", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2173--2182", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniela Gerz, Ivan Vuli\u0107, Felix Hill, Roi Reichart, and Anna Korhonen. 2016. Simverb-3500: A large- scale evaluation set of verb similarity. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2173-2182.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Learning to understand phrases by embedding the dictionary", "authors": [ { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "17--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felix Hill, KyungHyun Cho, Anna Korhonen, and Yoshua Bengio. 2016. Learning to understand phrases by embedding the dictionary. Transactions of the Association for Computational Linguistics, 4:17-30.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation", "authors": [ { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2015, "venue": "Computational Linguistics", "volume": "41", "issue": "4", "pages": "665--695", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. Simlex-999: Evaluating semantic models with (gen- uine) similarity estimation. Computational Linguis- tics, 41(4):665-695.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Improving word representations via global context and multiple word prototypes", "authors": [ { "first": "H", "middle": [], "last": "Eric", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Huang", "suffix": "" }, { "first": "", "middle": [], "last": "Socher", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "Andrew Y", "middle": [], "last": "Manning", "suffix": "" }, { "first": "", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "873--882", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric H Huang, Richard Socher, Christopher D Man- ning, and Andrew Y Ng. 2012. Improving word representations via global context and multiple word prototypes. 
In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 873-882.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "How to evaluate word embeddings? on importance of data efficiency and simple supervised tasks", "authors": [ { "first": "Stanis\u0142aw", "middle": [], "last": "Jastrzebski", "suffix": "" }, { "first": "Damian", "middle": [], "last": "Le\u015bniak", "suffix": "" }, { "first": "Wojciech", "middle": [ "Marian" ], "last": "Czarnecki", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1702.02170" ] }, "num": null, "urls": [], "raw_text": "Stanis\u0142aw Jastrzebski, Damian Le\u015bniak, and Woj- ciech Marian Czarnecki. 2017. How to evaluate word embeddings? on importance of data effi- ciency and simple supervised tasks. arXiv preprint arXiv:1702.02170.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Addressing the rare word problem in neural machine translation", "authors": [ { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Le", "suffix": "" }, { "first": "Wojciech", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "", "middle": [], "last": "Zaremba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1410.8206" ] }, "num": null, "urls": [], "raw_text": "Minh-Thang Luong, Ilya Sutskever, Quoc V Le, Oriol Vinyals, and Wojciech Zaremba. 2014. Addressing the rare word problem in neural machine translation. arXiv preprint arXiv:1410.8206.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Advances in pre-training distributed word representations", "authors": [ { "first": "Tom\u00e1\u0161", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "\u00c9douard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Puhrsch", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom\u00e1\u0161 Mikolov,\u00c9douard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Ad- vances in pre-training distributed word representa- tions. 
In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Wordnet: a lexical database for english", "authors": [ { "first": "A", "middle": [], "last": "George", "suffix": "" }, { "first": "", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1995, "venue": "Communications of the ACM", "volume": "38", "issue": "11", "pages": "39--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39- 41.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Counter-fitting word vectors to linguistic constraints", "authors": [ { "first": "N", "middle": [], "last": "Mrk\u0161i\u0107", "suffix": "" }, { "first": "", "middle": [], "last": "S\u00e9aghdha", "suffix": "" }, { "first": "M", "middle": [], "last": "Thomson", "suffix": "" }, { "first": "", "middle": [], "last": "Ga\u0161i\u0107", "suffix": "" }, { "first": "P", "middle": [ "H" ], "last": "Rojas-Barahona", "suffix": "" }, { "first": "", "middle": [], "last": "Su", "suffix": "" }, { "first": "", "middle": [], "last": "Vandyke", "suffix": "" }, { "first": "S", "middle": [], "last": "Wen", "suffix": "" }, { "first": "", "middle": [], "last": "Young", "suffix": "" } ], "year": 2016, "venue": "2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2016-Proceedings of the Conference", "volume": "", "issue": "", "pages": "142--148", "other_ids": {}, "num": null, "urls": [], "raw_text": "N Mrk\u0161i\u0107, D S\u00e9aghdha, B Thomson, M Ga\u0161i\u0107, L Rojas- Barahona, PH Su, D Vandyke, TH Wen, and S Young. 2016. Counter-fitting word vectors to lin- guistic constraints. In 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, NAACL HLT 2016-Proceedings of the Conference, pages 142-148.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Emo, love and god: making sense of urban dictionary, a crowd-sourced online dictionary", "authors": [ { "first": "Dong", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Mcgillivray", "suffix": "" }, { "first": "Taha", "middle": [], "last": "Yasseri", "suffix": "" } ], "year": 2018, "venue": "Royal Society open science", "volume": "5", "issue": "5", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dong Nguyen, Barbara McGillivray, and Taha Yasseri. 2018. Emo, love and god: making sense of ur- ban dictionary, a crowd-sourced online dictionary. Royal Society open science, 5(5):172320.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A word at a time: computing word relatedness using temporal semantic analysis", "authors": [ { "first": "Kira", "middle": [], "last": "Radinsky", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Agichtein", "suffix": "" }, { "first": "Evgeniy", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "Shaul", "middle": [], "last": "Markovitch", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 20th international conference on World wide web", "volume": "", "issue": "", "pages": "337--346", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kira Radinsky, Eugene Agichtein, Evgeniy Gabrilovich, and Shaul Markovitch. 2011. A word at a time: computing word relatedness using temporal semantic analysis. 
In Proceedings of the 20th international conference on World wide web, pages 337-346.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Contextual correlates of synonymy", "authors": [ { "first": "Herbert", "middle": [], "last": "Rubenstein", "suffix": "" }, { "first": "B", "middle": [], "last": "John", "suffix": "" }, { "first": "", "middle": [], "last": "Goodenough", "suffix": "" } ], "year": 1965, "venue": "Communications of the ACM", "volume": "8", "issue": "10", "pages": "627--633", "other_ids": {}, "num": null, "urls": [], "raw_text": "Herbert Rubenstein and John B Goodenough. 1965. Contextual correlates of synonymy. Communica- tions of the ACM, 8(10):627-633.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Urban dictionary embeddings for slang nlp applications", "authors": [ { "first": "R", "middle": [], "last": "Steven", "suffix": "" }, { "first": "Walid", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Magdy", "suffix": "" }, { "first": "Kiran", "middle": [], "last": "Mcgillivray", "suffix": "" }, { "first": "Gareth", "middle": [], "last": "Garimella", "suffix": "" }, { "first": "", "middle": [], "last": "Tyson", "suffix": "" } ], "year": 2020, "venue": "Proceedings of The 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "4764--4773", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven R. Wilson, Walid Magdy, Barbara McGillivray, Kiran Garimella, and Gareth Tyson. 2020a. Urban dictionary embeddings for slang nlp applications. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 4764-4773.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Analyzing temporal relationships between trending terms on twitter and urban dictionary activity", "authors": [ { "first": "R", "middle": [], "last": "Steven", "suffix": "" }, { "first": "Walid", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Magdy", "suffix": "" }, { "first": "Gareth", "middle": [], "last": "Mcgillivray", "suffix": "" }, { "first": "", "middle": [], "last": "Tyson", "suffix": "" } ], "year": 2020, "venue": "12th ACM Conference on Web Science, WebSci '20", "volume": "", "issue": "", "pages": "155--163", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven R. Wilson, Walid Magdy, Barbara McGillivray, and Gareth Tyson. 2020b. Analyzing temporal rela- tionships between trending terms on twitter and ur- ban dictionary activity. In 12th ACM Conference on Web Science, WebSci '20, page 155-163, New York, NY, USA. Association for Computing Machinery.", "links": null } }, "ref_entries": { "TABREF2": { "type_str": "table", "text": "Correlation (Spearman's \u03c1) with gold standard similarity and relatedness scores for development and evaluation datasets. Hill's model", "html": null, "content": "", "num": null }, "TABREF4": { "type_str": "table", "text": "Full similarity and relatedness evaluation results for each dictionary.", "html": null, "content": "
", "num": null } } } }