{ "paper_id": "D19-1007", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:08:33.303681Z" }, "title": "Room to Glo: A Systematic Comparison of Semantic Change Detection Approaches with Word Embeddings", "authors": [ { "first": "Philippa", "middle": [], "last": "Shoemark", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Alan Turing Institute", "location": { "country": "UK" } }, "email": "p.j.shoemark@ed.ac.uk" }, { "first": "Farhana", "middle": [], "last": "Ferdousi", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Alan Turing Institute", "location": { "country": "UK" } }, "email": "" }, { "first": "Dong", "middle": [], "last": "Nguyen", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Alan Turing Institute", "location": { "country": "UK" } }, "email": "dnguyen@turing.ac.uk" }, { "first": "Scott", "middle": [ "A" ], "last": "Hale \u2660 \u2020", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Alan Turing Institute", "location": { "country": "UK" } }, "email": "scott.hale@oii.ox.ac.uk" }, { "first": "Barbara", "middle": [], "last": "Mcgillivray", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Alan Turing Institute", "location": { "country": "UK" } }, "email": "bmcgillivray@turing.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Word embeddings are increasingly used for the automatic detection of semantic change; yet, a robust evaluation and systematic comparison of the choices involved has been lacking. We propose a new evaluation framework for semantic change detection and find that (i) using the whole time series is preferable over only comparing between the first and last time points; (ii) independently trained and aligned embeddings perform better than continuously trained embeddings for long time periods; and (iii) that the reference point for comparison matters. We also present an analysis of the changes detected on a large Twitter dataset spanning 5.5 years.", "pdf_parse": { "paper_id": "D19-1007", "_pdf_hash": "", "abstract": [ { "text": "Word embeddings are increasingly used for the automatic detection of semantic change; yet, a robust evaluation and systematic comparison of the choices involved has been lacking. We propose a new evaluation framework for semantic change detection and find that (i) using the whole time series is preferable over only comparing between the first and last time points; (ii) independently trained and aligned embeddings perform better than continuously trained embeddings for long time periods; and (iii) that the reference point for comparison matters. We also present an analysis of the changes detected on a large Twitter dataset spanning 5.5 years.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Semantic change, i.e., the change in the meanings of words, is inherent in language. A new meaning for a word can be added to the original one, become more or less prevalent, or even replace a former meaning (see Koch, 2016 ). An example is lit, which has gained a new sense of 'exciting' or 'awesome', via the extension of its longestablished use as slang for 'intoxicated' to describe the vibrant environment in which acts of becoming intoxicated often occur. 1 Automatically measuring semantic change can discover changes that would not be apparent from manual inspection. 
It can also facilitate the investigation of mechanisms driving semantic changes, e.g., how these changes are affected by language-internal and social factors. Moreover, there are direct benefits to applications, such as the detection of meaning shifts in polarized words to update sentiment lexicons and the detection of emerging word meanings to update dictionaries.", "cite_spans": [ { "start": 213, "end": 223, "text": "Koch, 2016", "ref_id": "BIBREF13" }, { "start": 462, "end": 463, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Word embeddings are increasingly used for automatic semantic change detection (Kutuzov et al., 2018) . Words are mapped to low-dimensional vectors, and the semantic change of a word is then measured by comparing its vectors across time periods. Although word embeddings have emerged as one of the most popular approaches to measuring semantic change, researchers are faced with various decisions, including whether to train embeddings independently or continuously, which metric to use to measure change between two time periods, and which ranking approach to use for comparing semantic change candidates.", "cite_spans": [ { "start": 78, "end": 100, "text": "(Kutuzov et al., 2018)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A major challenge in developing semantic change detection systems is obtaining ground truth data (Kutuzov et al., 2018) , which has so far prevented a systematic evaluation of different approaches. Many studies rely on hand-picked examples (e.g., Wijaya and Yeniterzi, 2011; Rodda et al., 2017) or human judgements (e.g., Tredici et al., 2018) . Some studies have performed evaluations based on dictionary data (e.g., Cook et al., 2014; Basile and McGillivray, 2018) , manual annotation of dictionary senses in corpora (McGillivray et al., 2019) , and manual annotation of word types (Kenter et al., 2015) , but this approach is not well-suited for recent, yet-to-be-recorded changes.", "cite_spans": [ { "start": 97, "end": 119, "text": "(Kutuzov et al., 2018)", "ref_id": "BIBREF15" }, { "start": 247, "end": 274, "text": "Wijaya and Yeniterzi, 2011;", "ref_id": "BIBREF33" }, { "start": 275, "end": 294, "text": "Rodda et al., 2017)", "ref_id": "BIBREF24" }, { "start": 322, "end": 343, "text": "Tredici et al., 2018)", "ref_id": "BIBREF32" }, { "start": 418, "end": 436, "text": "Cook et al., 2014;", "ref_id": "BIBREF4" }, { "start": 437, "end": 466, "text": "Basile and McGillivray, 2018)", "ref_id": "BIBREF1" }, { "start": 519, "end": 545, "text": "(McGillivray et al., 2019)", "ref_id": "BIBREF17" }, { "start": 584, "end": 605, "text": "(Kenter et al., 2015)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we present a new framework to evaluate semantic change detection systems (Section 5.1). We model multiple semantic change scenarios and compare the impact of different choices that are typical when using word embeddings to analyse semantic change (Section 5.2). Our framework is not specific to the use of word embeddings and can also support the evaluation of other approaches not considered in this paper. 2 (Footnote 2: The dataset and all code for this paper is available at https://github.com/alan-turing-institute/room2glo.) We
then apply the approaches to 5.5 years of Twitter data and provide an in-depth analysis of the topranked semantic change candidates (Section 6).", "cite_spans": [ { "start": 423, "end": 424, "text": "2", "ref_id": null }, { "start": 428, "end": 429, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There has been increasing interest in automatic semantic change detection (Tang, 2018; Kutuzov et al., 2018) , using methods ranging from neural models to Bayesian learning (e.g., Frermann and Lapata, 2016) , Temporal Random Indexing (e.g., Basile and McGillivray, 2018) and dynamic topic modelling (e.g., Blei and Lafferty, 2006) . Word embeddings have been especially popular (e.g., Dubossarsky et al., 2017; Hamilton et al., 2016b) , and recently Bamler and Mandt (2017) and Rudolph and Blei (2018) explored dynamic embeddings for semantic change detection by training a joint model over all time periods.", "cite_spans": [ { "start": 74, "end": 86, "text": "(Tang, 2018;", "ref_id": "BIBREF30" }, { "start": 87, "end": 108, "text": "Kutuzov et al., 2018)", "ref_id": "BIBREF15" }, { "start": 180, "end": 206, "text": "Frermann and Lapata, 2016)", "ref_id": "BIBREF6" }, { "start": 241, "end": 270, "text": "Basile and McGillivray, 2018)", "ref_id": "BIBREF1" }, { "start": 306, "end": 330, "text": "Blei and Lafferty, 2006)", "ref_id": "BIBREF2" }, { "start": 385, "end": 410, "text": "Dubossarsky et al., 2017;", "ref_id": "BIBREF5" }, { "start": 411, "end": 434, "text": "Hamilton et al., 2016b)", "ref_id": "BIBREF9" }, { "start": 450, "end": 473, "text": "Bamler and Mandt (2017)", "ref_id": "BIBREF0" }, { "start": 478, "end": 501, "text": "Rudolph and Blei (2018)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Most previous work analysed corpora spanning long time periods (e.g., a few centuries), such as the Google Books Ngrams corpus and the Corpus of Historical American English (e.g., Hamilton et al., 2016b) . Recently short-term semantic changes have been studied, for example in Amazon Reviews (Kulkarni et al., 2015) , scientific papers (Rudolph and Blei, 2018), news articles (Tang et al., 2016; Yao et al., 2018) , and the UK Web Archive (Basile and McGillivray, 2018) .", "cite_spans": [ { "start": 180, "end": 203, "text": "Hamilton et al., 2016b)", "ref_id": "BIBREF9" }, { "start": 292, "end": 315, "text": "(Kulkarni et al., 2015)", "ref_id": "BIBREF14" }, { "start": 376, "end": 395, "text": "(Tang et al., 2016;", "ref_id": "BIBREF31" }, { "start": 396, "end": 413, "text": "Yao et al., 2018)", "ref_id": "BIBREF34" }, { "start": 439, "end": 469, "text": "(Basile and McGillivray, 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this paper, we focus on social media: in particular, on Twitter. Semantic change in social media has only been lightly explored, with studies on Twitter (Kulkarni et al., 2015) , the VKontake social network (Stewart et al., 2017) , and Reddit (Tredici et al., 2018) . 
In comparison to these studies, our data covers a longer time period and our evaluation more deeply explores the various choices involved in semantic change detection.", "cite_spans": [ { "start": 156, "end": 179, "text": "(Kulkarni et al., 2015)", "ref_id": "BIBREF14" }, { "start": 210, "end": 232, "text": "(Stewart et al., 2017)", "ref_id": "BIBREF28" }, { "start": 239, "end": 268, "text": "Reddit (Tredici et al., 2018)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Much of the previous work on semantic change discovery has relied on qualitative evaluations of small samples from the output, case studies of a few well-known historical changes (e.g., Kim et al., 2014; Hamilton et al., 2016a,b; Stewart et al., 2017) , or attested changes extracted from dictionaries (e.g., Rohrdantz et al., 2011; Cook et al., 2014; Basile and McGillivray, 2018) . Some evaluations have been based on related tasks for which performance is expected to correlate, such as classifying the time period a text snippet belongs to (Mihalcea and Nastase, 2012) or predicting real-world events (Kutuzov et al., 2017 ).", "cite_spans": [ { "start": 186, "end": 203, "text": "Kim et al., 2014;", "ref_id": "BIBREF12" }, { "start": 204, "end": 229, "text": "Hamilton et al., 2016a,b;", "ref_id": null }, { "start": 230, "end": 251, "text": "Stewart et al., 2017)", "ref_id": "BIBREF28" }, { "start": 309, "end": 332, "text": "Rohrdantz et al., 2011;", "ref_id": "BIBREF25" }, { "start": 333, "end": 351, "text": "Cook et al., 2014;", "ref_id": "BIBREF4" }, { "start": 352, "end": 381, "text": "Basile and McGillivray, 2018)", "ref_id": "BIBREF1" }, { "start": 544, "end": 572, "text": "(Mihalcea and Nastase, 2012)", "ref_id": "BIBREF18" }, { "start": 605, "end": 626, "text": "(Kutuzov et al., 2017", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Here we look for meaning changes over a short, recent time period. There is little existing literature on words that have undergone meaning change within the relevant time-frame, and language on social media is not always fully reflected in general language dictionaries. Moreover, even if we were able to obtain a substantial list of attested meaning changes, a system might still discover other valid meaning change candidates. Unfortunately, determining the validity of semantic change candidates is time-consuming, labourintensive, and subjective; so, building on prior approaches (e.g., Kulkarni et al., 2015; Rosenfeld and Erk, 2018; Nguyen and Eisenstein, 2017) , we introduce a new synthetic evaluation framework for semantic change detection. Synthetic evaluation is especially important for short, recent time periods given the lack of other resources for evaluation, but it is also valuable for longer periods to detect hitherto unknown changes. Moreover, phenomena like seasonal trends are more likely to interfere with semantic change detection for short time periods, making this a challenging use case for semantic change detection. 
At the same time, it is an important use case in order to advance semantic change detection for contemporary data to be used to update lexicons, sentiment/polarity ratings, and other language resources.", "cite_spans": [ { "start": 592, "end": 614, "text": "Kulkarni et al., 2015;", "ref_id": "BIBREF14" }, { "start": 615, "end": 639, "text": "Rosenfeld and Erk, 2018;", "ref_id": "BIBREF26" }, { "start": 640, "end": 668, "text": "Nguyen and Eisenstein, 2017)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We collected tweets from Twitter's 'statuses/sample' streaming API endpoint from January 1, 2012, to June 30, 2017. There are a few minor gaps in our data due to occasional data collection issues. Most are a few minutes or at most a day, but one gap spans January and February 2015. Overall, our dataset consists of over 7 billion tweets sent during 1,889 days.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "We use the Compact Language Detector version 2 (CLDv2), 3 following guidance from Graham et al. 2014, and we discard any tweets for which CLD detects less than 90% of the text to be in English, resulting in roughly 2.5 billion tweets. The remaining tweets are then lowercased, and usernames, urls, and non-alphanumeric characters (except emoji and hashtags) are removed. The text is then tokenized on whitespace. Digit-only tokens are replaced with ''. Finally, we discard tweets that are duplicated within a given month, as tweets which are re-tweeted or copied verba-68 tim many times are not independent language samples and may exert undue influence on embeddings (Mikolov et al., 2018) . Our final dataset consists of 1,696,142,020 tweets and 20,273,497,107 tokens.", "cite_spans": [ { "start": 673, "end": 695, "text": "(Mikolov et al., 2018)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "Following the approach introduced by Kim et al. (2014) and adopted by Hamilton et al. (2016b) and others, we divide our dataset into discrete time periods, and for each time period t we compute word embeddings, representing each word w by a ddimensional vector. We then compare the embeddings between different time periods to measure the semantic change of words. We use monthly bins, but the approach is applicable to time periods of any length, provided there is sufficient data in each bin to train quality embeddings.", "cite_spans": [ { "start": 37, "end": 54, "text": "Kim et al. (2014)", "ref_id": "BIBREF12" }, { "start": 70, "end": 93, "text": "Hamilton et al. (2016b)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "4" }, { "text": "We train word embeddings using gensim's (\u0158eh\u016f\u0159ek and Sojka, 2010) implementation of the continuous bag of words (CBOW; Mikolov et al., 2013) model. 4 Two evaluation tasks (word similarity using the dataset Wordsim353 5 and word analogy using the word test dataset 6 ) were used to tune four hyperparameters, resulting in 200 dimensions, a window size of 10, 15 iterations, and a minimum frequency of 500 (per time-step). 
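To make the training set-up concrete, here is a minimal sketch of the per-month training loop with the hyperparameters just listed (the file layout and worker count are assumptions, not the released room2glo pipeline):

```python
# Train one CBOW model per monthly bin with the hyperparameters given above.
# Assumes one pre-tokenised file per month, one tweet per line (hypothetical layout).
import glob
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

for path in sorted(glob.glob('tokenised_months/*.txt')):
    sentences = LineSentence(path)                     # streams one tweet per line
    model = Word2Vec(sentences, sg=0,                  # sg=0 selects CBOW
                     size=200, window=10, iter=15,     # in gensim >= 4: vector_size / epochs
                     min_count=500, workers=8)
    model.wv.save(path.replace('.txt', '.kv'))         # keep only the word vectors for later comparison
```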
For all other hyperparameters we use gensim's default values.", "cite_spans": [ { "start": 119, "end": 140, "text": "Mikolov et al., 2013)", "ref_id": "BIBREF19" }, { "start": 148, "end": 149, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Training Word Embeddings", "sec_num": "4.1" }, { "text": "To compare embeddings for a word between two time-points, the embeddings need to be in the same coordinate axes. We experiment with three approaches: (1) Training continuously by initializing the embeddings for a given time-step t with the embeddings trained at the previous time-step t \u2212 1 (e.g., Kim et al., 2014) ; (2) Training embeddings for each time-step independently and posthoc aligning them (e.g., Hamilton et al., 2016b; Kulkarni et al., 2015) using orthogonal Procrustes 4 We only report results using CBOW in this paper. We found similar trends when using the skip-gram model, which has been used in previous works on semantic change (e.g., Kim et al., 2014; Kulkarni et al., 2015; Hamilton et al., 2016b; Stewart et al., 2017; Tredici et al., 2018) .", "cite_spans": [ { "start": 298, "end": 315, "text": "Kim et al., 2014)", "ref_id": "BIBREF12" }, { "start": 408, "end": 431, "text": "Hamilton et al., 2016b;", "ref_id": "BIBREF9" }, { "start": 432, "end": 454, "text": "Kulkarni et al., 2015)", "ref_id": "BIBREF14" }, { "start": 483, "end": 484, "text": "4", "ref_id": null }, { "start": 654, "end": 671, "text": "Kim et al., 2014;", "ref_id": "BIBREF12" }, { "start": 672, "end": 694, "text": "Kulkarni et al., 2015;", "ref_id": "BIBREF14" }, { "start": 695, "end": 718, "text": "Hamilton et al., 2016b;", "ref_id": "BIBREF9" }, { "start": 719, "end": 740, "text": "Stewart et al., 2017;", "ref_id": "BIBREF28" }, { "start": 741, "end": 762, "text": "Tredici et al., 2018)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Comparable Embeddings", "sec_num": "4.2" }, { "text": "5 http://www.cs.technion.ac.il/~gabr/ resources/data/wordsim353/ 6 http://www.fit.vutbr.cz/~imikolov/ rnnlm/word-test.v1.txt (as used by Hamilton et al., 2016b) ; and (3) combining continuous training and post-hoc alignment (as in Stewart et al., 2017) .", "cite_spans": [ { "start": 137, "end": 160, "text": "Hamilton et al., 2016b)", "ref_id": "BIBREF9" }, { "start": 231, "end": 252, "text": "Stewart et al., 2017)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Comparable Embeddings", "sec_num": "4.2" }, { "text": "We compare two measures for quantifying a word's semantic change between two time points. The first is the cosine distance, a common approach in previous work (Hamilton et al., 2016b; Stewart et al., 2017; Dubossarsky et al., 2017; Kim et al., 2014) . The second measure, introduced by Hamilton et al. 2016a, is based on comparing the neighbourhoods of the embeddings. For each time-step t, we first find the ordered set of word w's k nearest neighbours, based on cosine similarity. Following Hamilton et al. 2016a, we set k = 25. 
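Before returning to the neighbourhood-based comparison, a minimal sketch of the post-hoc alignment (approach 2 above) and the cosine-distance measure, assuming each month's vectors are held in a plain dict of numpy arrays (variable names are illustrative, not the paper's code):

```python
# Align the embeddings of time-step t onto a reference space with orthogonal
# Procrustes (shared vocabulary only), then score a word by cosine distance.
import numpy as np
from scipy.linalg import orthogonal_procrustes

def align_to_reference(emb_t, emb_ref, shared_vocab):
    A = np.vstack([emb_t[w] for w in shared_vocab])
    B = np.vstack([emb_ref[w] for w in shared_vocab])
    R, _ = orthogonal_procrustes(A, B)           # orthogonal R minimising ||A R - B||_F
    return {w: v @ R for w, v in emb_t.items()}  # map all of time-step t into the reference space

def cosine_distance(v1, v2):
    return 1.0 - (v1 @ v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
```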
For any two time-steps, we then take the union S of the two nearest neighbour sets and create a second-order vector v_t, where each entry v_t^(i) contains the cosine similarity of the target word w to neighbouring word S^(i) at time t. We then measure the cosine distance between these two second-order vectors.", "cite_spans": [ { "start": 159, "end": 183, "text": "(Hamilton et al., 2016b;", "ref_id": "BIBREF9" }, { "start": 184, "end": 205, "text": "Stewart et al., 2017;", "ref_id": "BIBREF28" }, { "start": 206, "end": 231, "text": "Dubossarsky et al., 2017;", "ref_id": "BIBREF5" }, { "start": 232, "end": 249, "text": "Kim et al., 2014)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Measuring Semantic Change", "sec_num": "4.3" }, { "text": "Our goal is not only to measure semantic changes for pre-selected words, but to identify which words out of the entire vocabulary have undergone the greatest or most significant semantic change. We compare several approaches to generating ranked lists of the 'most changed' words. The first only measures the change between two time-steps. The remaining approaches consider the whole time series. For the approaches that use the whole time series, we limit the semantic change candidates to words that occur at least 500 times in at least 75% of the time-steps, simply condensing a word's time series if there are gaps. Two-step approach: We first measure each word's semantic change between just two pre-selected time-steps (in this study, the first and final time-steps). This simple approach has been used in previous work, such as Kim et al. (2014) .", "cite_spans": [ { "start": 833, "end": 850, "text": "Kim et al. (2014)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Ranking Semantic Change Candidates", "sec_num": "4.4" }, { "text": "Change-point detection: Following Kulkarni et al. (2015) , we choose one time-step t_0 as a reference and compute semantic change scores for each word with respect to t_0 at every other time-step t_i. Then, for each word w and each time-step t_i, we compute a mean-shift score by partitioning w's time series of semantic change scores at t_i, and calculating the difference between the means of the scores in the two partitions. Following Kulkarni et al. 2015, we use Monte Carlo permutation tests to estimate the statistical significance of mean-shift scores, and take the time-step with the lowest estimated p-value as the change point. Words are first sorted in descending order of the mean-shift scores of their estimated change points and then in ascending order of their p-values.", "cite_spans": [ { "start": 33, "end": 55, "text": "Kulkarni et al. (2015)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Ranking Semantic Change Candidates", "sec_num": "4.4" },
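The change-point ranking just described can be sketched as follows (a simplified illustration of the mean-shift score and the Monte Carlo permutation test; the tie-breaking and the number of permutations are assumptions, not the paper's exact implementation):

```python
# Mean-shift change-point scoring for one word's series of semantic change scores
# (computed with respect to a reference time-step t_0), with a permutation test.
import numpy as np

def mean_shift(series, i):
    # difference between the mean score after and before candidate change point i
    return np.mean(series[i:]) - np.mean(series[:i])

def change_point(series, n_perm=1000, seed=0):
    rng = np.random.default_rng(seed)
    candidates = range(1, len(series))
    shifts = np.array([mean_shift(series, i) for i in candidates])
    best = int(np.argmax(shifts))
    observed = shifts[best]
    # p-value: how often a shuffled series yields an equally large shift anywhere
    null = [max(mean_shift(rng.permutation(series), i) for i in candidates)
            for _ in range(n_perm)]
    p_value = (np.sum(np.array(null) >= observed) + 1) / (n_perm + 1)
    return best + 1, observed, p_value   # estimated change point, its score, p-value
```

Words can then be sorted by mean-shift score and p-value as described above; the z-scored variant simply standardizes each time-step's scores across the vocabulary first.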
{ "text": "Kulkarni et al. standardized each word's cosine score for a given time-step relative to the mean score across all words at that time-step. This is meant to help control for corpus artefacts, e.g., shifting sampling biases over time, but its impact has not been demonstrated yet. We compare the results of ranking words without standardization (raw scores) and with standardization (z scores).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking Semantic Change Candidates", "sec_num": "4.4" }, { "text": "Global trend detection: We also compare three approaches to detect global trends in the same time series of semantic change scores as the change point detection methods. The first approach is fitting a linear regression model d_i = \u03b1 + \u03b2 t_i + \u03b5_i, where d_i is the semantic change score, t_i is the time period in {1, ..., n}, and \u03b5_i is the error term. We rank the words based on their absolute \u03b2 values (slopes), which gives the semantic change per time period under the assumption of a linear relationship.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking Semantic Change Candidates", "sec_num": "4.4" }, { "text": "We also experiment with two correlation measures: Pearson's (r) and Kendall's rank (\u03c4 ) correlation coefficients. In contrast to linear regression and Pearson's correlation coefficient, Kendall's tau is non-parametric and resistant to outliers (Kendall, 1948) . It is therefore often used for measuring trends in time series and change point detection (Quessy et al., 2013) . We rank the words based on the absolute values of \u03c4 and r.", "cite_spans": [ { "start": 244, "end": 259, "text": "(Kendall, 1948)", "ref_id": "BIBREF10" }, { "start": 352, "end": 373, "text": "(Quessy et al., 2013)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Ranking Semantic Change Candidates", "sec_num": "4.4" }, { "text": "To systematically compare the different methodological choices we introduce a new synthetic evaluation framework. We create seven schemas for how a word's distributional statistics may change. Three of these model scenarios in which a semantic change occurs, but crucially the remaining four model scenarios that we would not wish to classify as semantic change. Our framework builds on previous approaches that have modelled one type of semantic change-either a word gaining an additional sense (Kulkarni et al., 2015) or a word's original sense being completely replaced (Rosenfeld and Erk, 2018) . Furthermore, although most work has focused on recall, our framework can also test precision, i.e., the ability to distinguish the injected changes from noise.", "cite_spans": [ { "start": 496, "end": 519, "text": "(Kulkarni et al., 2015)", "ref_id": "BIBREF14" }, { "start": 573, "end": 598, "text": "(Rosenfeld and Erk, 2018)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Synthetic Evaluation", "sec_num": "5" }, { "text": "We first randomly sample 10% of the tweets from a single month from the middle of our empirical dataset (Dec. 2014). We then draw 66 random 70% samples with replacement from this sample. These 66 samples represent a dataset of 66 months (5.5 years) in which no semantic changes occur, but words' distributional statistics still vary from month to month due to sampling noise. This differs from, e.g., Kulkarni et al. (2015) , who used a series of exact duplicates of an initial set of documents. Finally, we inject controlled changes by inserting made-up 'pseudowords', carefully changing their frequencies and co-occurrence distributions throughout the time series.", "cite_spans": [ { "start": 401, "end": 423, "text": "Kulkarni et al. (2015)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset Construction", "sec_num": "5.1" },
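A minimal sketch of this re-sampling step (file names are hypothetical; the real pipeline may differ):

```python
# Build 66 synthetic 'months' by drawing 70% bootstrap samples (with replacement)
# from a single 10% sample of the December 2014 tweets.
import os
import random

random.seed(42)
with open('tweets_2014-12_10pct.txt') as f:   # hypothetical path to the base sample
    base = f.readlines()

os.makedirs('synthetic', exist_ok=True)
sample_size = int(0.7 * len(base))
for month in range(66):
    synthetic = random.choices(base, k=sample_size)   # sampling with replacement
    with open(f'synthetic/month_{month:02d}.txt', 'w') as out:
        out.writelines(synthetic)
```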
{ "text": "Our procedure for inserting pseudowords is as follows: we split the real words that occur in our empirical data for December 2014 into five equally sized frequency bins. For each pseudoword \u03c1 that we insert, we choose a frequency bin. To represent one of the senses of \u03c1, we sample a real word w from the relevant frequency bin. For each synthetic month m, we insert \u03c1 replacing each token of w with success probability p (\u03c1,w,m) .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Construction", "sec_num": "5.1" }, { "text": "For example, we might insert one pseudoword replacing the instances of the word 'pudding' with a fixed probability throughout the whole time series, and then insert this same pseudoword replacing the instances of the word 'neon' with increasing probability over time. This would model a word that initially has a meaning related to 'pudding', but which then acquires a new sense related to 'neon'. We use seven different schemas: three model different kinds of semantic change (C1-C3), and four model ephemeral changes that we aim to avoid (D1-D4): see Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 553, "end": 561, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Dataset Construction", "sec_num": "5.1" }, { "text": "Figure 1 : Illustration of our seven schemas for inserting pseudowords into the synthetic dataset. Each line represents a different pseudosense. Lines chart the probability of inserting a pseudoword token replacing a token representing the relevant pseudosense, as a function of 'time'. We vary whether the success probabilities change linearly or logarithmically and the time-steps at which the changes begin and end.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Construction", "sec_num": "5.1" }, { "text": "C1: Description: This schema models a word that gradually acquires a new sense over time while retaining its original sense (e.g., snowflake, lit). This corresponds to what Koch (2016, 24) calls 'innovative meaning change' and Tahmasebi et al. (2018, 35) calls 'novel word sense'. Procedure: Sample one real word w_1 to represent the original pseudosense 7 of the pseudoword \u03c1 and another real word w_2 to represent its new pseudosense. For each token of w_1 that occurs in the synthetic dataset for month m, insert a token of \u03c1 replacing it with success probability p (\u03c1,w_1,m) , which remains constant throughout the time series. Insert \u03c1 replacing each token of w_2 with success probability p (\u03c1,w_2,m) , which starts low and gradually increases over time.", "cite_spans": [ { "start": 737, "end": 752, "text": "Koch (2016, 24)", "ref_id": null }, { "start": 791, "end": 818, "text": "Tahmasebi et al. (2018, 35)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Dataset Construction", "sec_num": "5.1" },
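To illustrate, a simplified sketch of the schema C1 insertion step for a single synthetic month, reusing the 'pudding'/'neon' example above (the linear probability schedule and the probability values shown are illustrative; logarithmic schedules are used as well):

```python
# Schema C1: the original pseudosense (w1) is inserted with a constant probability,
# while the new pseudosense (w2) ramps up over the 66 synthetic months.
import random

def insert_c1(tokens, month, pseudoword='pw_c1', w1='pudding', w2='neon',
              p_w1=0.5, n_months=66):
    p_w2 = 0.05 + (0.90 - 0.05) * month / (n_months - 1)   # linear ramp for the new sense
    out = []
    for tok in tokens:
        if tok == w1 and random.random() < p_w1:
            out.append(pseudoword)       # replace a token of the original pseudosense
        elif tok == w2 and random.random() < p_w2:
            out.append(pseudoword)       # replace a token of the emerging pseudosense
        else:
            out.append(tok)
    return out
```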
{ "text": "C2: Description: This schema models a word that gradually acquires a new sense over time while its original sense gradually falls out of use (cf., silly, which originally meant 'happy' or 'lucky' and now means 'foolish'). This corresponds to the full cycle of genesis and disappearance of lexical polysemy as described by Koch (2016, 25) , i.e., an 'innovative meaning change' and a 'reductive meaning change'. Procedure: Sample one real word w_1 to represent the original sense of the pseudoword \u03c1, and another real word w_2 to represent its new sense. For each token of w_1 that occurs in the synthetic dataset for month m, insert a token of \u03c1 replacing it with success probability p (\u03c1,w_1,m) , which starts relatively high and gradually decreases over time. Insert \u03c1 replacing each token of w_2 with success probability p (\u03c1,w_2,m) = 1 \u2212 p (\u03c1,w_1,m) .", "cite_spans": [ { "start": 320, "end": 335, "text": "Koch (2016, 25)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "C2", "sec_num": null }, { "text": "C3: Description: This schema models a word with many senses, different random subsets of which are relatively frequent each month (e.g., an acronym that can refer to many different entities, which may trend at different times). Over time, the word acquires an additional, more stable sense whose frequency does not fluctuate so much from month to month. An example is BLM, which has been used to refer to a baseball magazine, a marketing company, a music label, the US Bureau of Land Management, etc., but since 2013 has been consistently associated with the Black Lives Matter movement. This could be considered a 'reductive' meaning change-in-progress, as we start out with multiple competing senses, and one sense gradually comes to dominate without the others having yet died out (see Koch, 2016, Fig. 2) . Procedure: Sample eight real words {w_1, w_2, ..., w_8} to represent eight different pseudosenses for the pseudoword \u03c1. For each month m, draw a multinomial distribution D_m over the first seven sampled words, using a Dirichlet prior with uniform, sparsity-inducing alpha. Replacing each token of a word w_i, i \u2208 [1, 7], insert a token of \u03c1 with success probability D_m(w_i). Let w_8 represent the new, more stable pseudosense, and for each month m, insert a token of \u03c1 replacing each token of w_8 with success probability p (\u03c1,w_8,m) , which starts low and gradually increases over time. D1: Description: This schema models a word that becomes more frequent over time, but does not change its co-occurrence distribution. Procedure: Sample one real word w to represent the meaning of the pseudoword \u03c1. For each token of w that occurs in the synthetic dataset for month m, insert a token of \u03c1 replacing the token of w with success probability p (\u03c1,w,m) , which starts relatively low and gradually increases over time.", "cite_spans": [ { "start": 483, "end": 502, "text": "Koch, 2016, Fig. 2)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "C3:", "sec_num": null },
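The month-wise sense mixing used in schemas C3 and D4 can be sketched as follows (the concentration parameter and the placeholder sense words are assumptions):

```python
# Draw a different skewed distribution over the seven 'unstable' pseudosenses each
# month; a sparse Dirichlet prior makes a few senses dominate in any given month.
import numpy as np

rng = np.random.default_rng(0)
sense_words = ['w1', 'w2', 'w3', 'w4', 'w5', 'w6', 'w7']     # stand-ins for the sampled real words

monthly_insertion_probs = []
for month in range(66):
    probs = rng.dirichlet(alpha=[0.1] * len(sense_words))     # uniform, sparsity-inducing alpha
    monthly_insertion_probs.append(dict(zip(sense_words, probs)))
    # each probability is then used as the per-token replacement probability D_m(w_i)
```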
{ "text": "D2: Description: This schema models a word with two senses. One sense is relatively infrequent, but suddenly spikes in frequency (e.g., due to a trending topic), before becoming infrequent again. Procedure: Sample two real words w_1 and w_2 to represent the two pseudosenses. Insert \u03c1 replacing each token of w_1 with probability p (\u03c1,w_1,m) , which remains constant throughout the time series, and replacing each token of w_2 with probability p (\u03c1,w_2,m) , which starts relatively low, rapidly increases and then rapidly decreases again. D3: Description: This schema models a word with two senses: one is usually relatively infrequent, but spikes in frequency at periodic intervals (i.e., during the same month every year). An example is turkey, whose 'poultry' sense tends to be much more frequent around American Thanksgiving and Christmas. Procedure: Sample two real words w_1 and w_2 to represent the two pseudosenses. Insert \u03c1 replacing each token of w_1 with probability p (\u03c1,w_1,m) , which remains constant throughout the time series, and replace each token of w_2 with probability p (\u03c1,w_2,m) , which is relatively low for most time-steps but rapidly spikes around the same month each year. D4: Description: Like C3, this schema models words that can refer to many different entities, but in this case, an additional, more stable sense does not emerge. Procedure: Sample seven real words {w_1, w_2, ..., w_7} to represent different pseudosenses for \u03c1. For each month m, draw a multinomial distribution D_m over these seven words, using a Dirichlet prior with uniform, sparsity-inducing alpha. Replacing each token of a word w_i, i \u2208 [1, 7], insert a token of \u03c1 with success probability D_m(w_i).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "D2", "sec_num": null }, { "text": "For each of these seven schemas, we create thirty pseudowords (six for each of our five frequency bins), and we vary whether the success probabilities change linearly or logarithmically and the time-steps at which the changes begin and end. In total, we insert 90 pseudowords using Schemas C1-C3, which model genuine semantic changes that we would like to be able to detect, and 120 pseudowords using Schemas D1-D4, which model real changes in words' use statistics but do not reflect semantic change. The synthetic dataset also contains 887,926 real words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "D2", "sec_num": null }, { "text": "We evaluate systems by how highly they rank pseudowords from schemas C1-C3 using the Average Precision @ K, which approximates the area under a precision-recall curve over the interval from 0 to K. It is defined as the sum, over every rank r in the top-K list of semantic change candidates, of the precision at rank r multiplied by the change in recall between ranks r \u2212 1 and r: AP@K = \u2211_{r=1}^{K} P(r)\u2206R(r), where P(r) is the percentage of top-r candidates which are pseudowords belonging to Schemas C1-C3, and R(r) is the percentage of all C1-C3 pseudowords that appear in the top-r.
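A small sketch of this metric over a ranked candidate list (function and variable names are illustrative):

```python
# Average Precision @ K: relevant() marks pseudowords from schemas C1-C3 and
# n_relevant_total is the total number of such pseudowords (90 in our setting).
def average_precision_at_k(ranked_words, relevant, n_relevant_total, k=50):
    hits, ap = 0, 0.0
    for r, word in enumerate(ranked_words[:k], start=1):
        if relevant(word):
            hits += 1
            ap += (hits / r) * (1.0 / n_relevant_total)   # precision at r times the change in recall
    return ap
```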
The results are shown in Table 1 (two-step approach, comparing the first and last time steps) and Table 2 (whole time series). Precision-recall curves for the time series approaches are shown in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 609, "end": 616, "text": "Table 1", "ref_id": null }, { "start": 779, "end": 787, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5.2" }, { "text": "Continuous training does not ensure that embeddings are comparable. We experiment with three configurations for training the embeddings for the two-step approach: 1) training the embeddings for each time-step independently (ind.); 2) initializing the embeddings for the final time-step with those trained on the first time-step (cont.); and 3) continuous training throughout the whole series, so that the final time-step's embeddings are initialized with the data from all preceding time-steps (cont. whole series).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5.2" }, { "text": "Continuous training has been used without separate alignment (e.g., Kim et al., 2014) as each time period is a continuation of the embeddings from the previous period. Table 1 shows, however, that alignment is necessary for continuously trained embeddings using the whole series as well as for independent ones when using the cosine distance measure. It is likely that the huge number of training updates in the entire time series causes the embeddings to drift considerably. For the time series approaches, we therefore did not apply the cosine measure without first aligning embeddings.", "cite_spans": [ { "start": 68, "end": 85, "text": "Kim et al., 2014)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 168, "end": 175, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5.2" }, { "text": "Using the whole time series is more effective than comparing the first and the last time steps. Overall, the approaches using the whole time series ( Table 2 ) are more effective than the two-step approaches; particularly with regard to finding C1 pseudowords and avoiding D4 pseudowords.", "cite_spans": [], "ref_spans": [ { "start": 150, "end": 157, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5.2" }, { "text": "Continuous training provides no benefit for time series approaches. For the time series approaches, independent training tends to perform better than continuous training ( Table 2 ). The lack of improvement with continuous training is particularly noteworthy as independent training is more computationally efficient than continuous training, since different time periods can be trained in parallel. We did not explore the impact of different hyperparameter choices on continuous training, but note this would introduce another level of complexity in tuning model parameters.", "cite_spans": [], "ref_spans": [ { "start": 172, "end": 179, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5.2" }, { "text": "Table 1 : Average Precision @ 50 on the synthetic dataset of the two-step approach with CBOW. Columns: ind. / cont. / cont. (whole series). Rows: cosine (unaligned): 0.00 / 0.32 / 0.00; cosine (aligned): 0.25 / 0.32 / 0.27; neighbourhood: 0.28 / 0.34 / 0.30.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5.2" }, { "text": "Different time series approaches are best paired with different similarity measures. Hamilton et al.
(2016a) found that the neighbourhood-based measure tends to assign higher rates of semantic change to nouns, while the cosine measure tends to assign higher rates to verbs. However, they did not compare the overall effectiveness of these methods for semantic change detection. We find that the neighbourhood-based measure 8 tends to outperform the cosine measure for the change point detection approaches; however, cosine tends to outperform the neighbourhood measure for correlation approaches (see Table 2 ). For change point detection, standardization of the time series does not have a consistent effect.", "cite_spans": [ { "start": 85, "end": 108, "text": "Hamilton et al. (2016a)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 601, "end": 608, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5.2" }, { "text": "The reference point for comparison matters. For almost all configurations in Table 2 , the AP @ 50 is better when the reference point is the last time-step. Figure 3 shows Recall@K broken down by pseudoword type. For types C1-C3, higher recall is better. Conversely, lower recall is better for types D1-D4, since these model changes that we do not consider to be lasting semantic changes. Recall is consistently low for types D1-D4, but strikingly, recall is also low for type C3 when we compare to the first time-step. Schema C3 models words whose distributions change drastically from time-step to time-step, but which gradually become more stable as a new, consistently occurring sense emerges. The representation for the first time-step will thus be very different from subsequent representations, such that comparing to the first step is not effective. In contrast, comparing to the first time-step is expected to be more effective in finding words that become less stable over time.", "cite_spans": [], "ref_spans": [ { "start": 77, "end": 84, "text": "Table 2", "ref_id": null }, { "start": 157, "end": 165, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5.2" }, { "text": "Figure 3 : Recall@K of each schema, using independently trained embeddings and the neighbourhood-based measure. 'first'/'last' denotes the reference time-step, 'cp' the unstandardized change-point approach, and 'beta' the linear regression approach. C1-C3: higher recall is better, D1-D4: lower is better.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5.2" }, { "text": "Table 2 : Average Precision @ 50 on the synthetic dataset using time series approaches with CBOW. Change point methods are raw scores (raw) and standardized scores (z). Global trend methods are linear regression (\u03b2), Pearson correlation coefficient (r), and Kendall rank correlation coefficient (\u03c4 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5.2" }, { "text": "Correlation-based approaches perform worse than regression or change point detection approaches. Pearson's correlation coefficient is maximized when the magnitude of the change between consecutive time periods is consistent over all time periods, whereas maximizing Kendall's \u03c4 simply requires the change between consecutive time periods to be of a consistent sign. Both correlation measures therefore have particularly poor recall of words that have time periods without a consistent meaning, as in the early time periods for pseudowords of type C3. The \u03b2 value of the linear regression assumes a linear relationship, but is unfortunately sensitive to outliers (Chatterjee and Hadi, 1986) , which likely explains why the regression approach has higher recall than change point approaches for schema D4 (Figure 3 ), in which a stable sense does not emerge. In general, however, the \u03b2 values produced for D4 pseudowords appear to be smaller in magnitude than genuine semantic changes (C1-C3), resulting in average precision measures that generally match or exceed change point approaches (Table 2) . Regression is also more straightforward and computationally efficient to calculate than change point measures.", "cite_spans": [ { "start": 526, "end": 553, "text": "(Chatterjee and Hadi, 1986)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 667, "end": 676, "text": "(Figure 3", "ref_id": null }, { "start": 950, "end": 959, "text": "(Table 2)", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5.2" },
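For reference, the three global-trend statistics compared above can be computed for one word's series of change scores as follows (a sketch using scipy; words are then ranked by the absolute statistic):

```python
# Slope of a linear fit, Pearson's r, and Kendall's tau for a series of semantic
# change scores indexed by time-step.
import numpy as np
from scipy.stats import linregress, pearsonr, kendalltau

def trend_statistics(change_scores):
    t = np.arange(1, len(change_scores) + 1)
    beta = linregress(t, change_scores).slope   # semantic change per time period
    r, _ = pearsonr(t, change_scores)
    tau, _ = kendalltau(t, change_scores)
    return abs(beta), abs(r), abs(tau)
```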
{ "text": "We now apply the approaches to our full empirical Twitter dataset. Table 3 shows the top 10 semantic change candidates using independent, aligned CBOW embeddings. When using continuously trained embeddings, the top-10 lists are similar.", "cite_spans": [], "ref_spans": [ { "start": 67, "end": 74, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results on Empirical Twitter Data", "sec_num": "6" }, { "text": "In line with our synthetic results, we find different candidates when comparing to the first time-step or the last, but they appear to represent similar kinds of semantic change. Most have shifted due to associations with named entities. For example, vine (Figure 4a ) acquired a new sense in January 2013 when the popular short-form video hosting service Vine was launched. Similarly, ig, initially shorthand for 'i guess', became shorthand for the social network Instagram as it gained popularity. The embedding for shawn shifted significantly around the beginning of 2014, when the singer Shawn Mendes signed a record deal.", "cite_spans": [], "ref_spans": [ { "start": 255, "end": 265, "text": "(Figure 4a", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Results on Empirical Twitter Data", "sec_num": "6" }, { "text": "There are also words whose embeddings have shifted due to waning associations with prominent named entities, e.g., vow was initially associated with The Vow, a high-grossing movie released in Feb. 2012, but by the end of the time series it had shifted back towards synonyms like 'pledge' and 'urge'. 
Likewise the embedding for temple initially reflected the popularity of the video game Temple Run but gradually shifted to the word's canonical meaning, and the embedding for bcs initially reflected its usage as an acronym for Bowl Championship Series (a selection system in American college football), but then shifted towards 'bcoz', 'bec', and other forms of 'because' after the selection system was ended in 2013.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results on Empirical Twitter Data", "sec_num": "6" }, { "text": "Table 3 : Top 10 semantic change candidates of the change-point detection approach without standardization, using independently trained and aligned CBOW embeddings and the cosine distance measure. Comparing to first time-step: vine, temple, unfollowers, favorited, mcm, glo, #ipadgames, shawn, retweeted, vow. Comparing to last time-step: isis, yasss, bcs, temple, , mcm, , ig, mila, glo.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results on Empirical Twitter Data", "sec_num": "6" }, { "text": "There are also examples of neologisms: mcm is a lexicalized acronym for 'Man Crush Monday'. This initially referred to the meme of posting about a man one finds attractive each Monday, but then by metonymic extension came to be used to refer to the subject of the post himself. Another example is glo, which in the beginning of our data (Figure 4b ) occurs mainly in reference to a Nigerian telecommunication company. A shift in its embedding is driven by the sudden emergence of the expression 'glo up', which was coined in August 2013 by rapper Chief Keef in the song \"Gotta Glo Up One Day\", and later gained traction as an expression to describe an impressive personal transformation.", "cite_spans": [], "ref_spans": [ { "start": 427, "end": 437, "text": "(Figure 4b ", "ref_id": null } ], "eq_spans": [], "section": "Results on Empirical Twitter Data", "sec_num": "6" }, { "text": "Finally, there are words whose detected change points reflect changes in automated activity. For example, the embedding for yasss shifts in early 2017 due to a sudden proliferation of tweets automatically posted to users' Twitter accounts by the live video streaming app LiveMe, which all begin with the text 'YASSS It's time for a great show' followed by the title and link to the video stream. Conversely, the detected change for favorited ( Figure 4c ) coincides with a sudden disappearance of automatically generated tweets about favorited YouTube videos.", "cite_spans": [], "ref_spans": [ { "start": 443, "end": 452, "text": "Figure 4c", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Results on Empirical Twitter Data", "sec_num": "6" }, { "text": "In this paper, we presented a new evaluation framework and systematically compared the various choices involved in using word embeddings for semantic change detection. We then applied the approaches to a Twitter dataset spanning 5.5 years. Qualitative analysis found that the top-ranked words have undergone genuine semantic change, although some of the changes are restricted to social media or to Twitter specifically. Our framework and dataset can also be used to evaluate approaches not considered in this paper. 
Moreover, our framework models different semantic change scenarios, and future work could focus on approaches that are able to distinguish between these different scenarios.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "https://github.com/CLD2Owners/cld2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "A single pseudosense may in practice correspond to multiple real senses, since the real word we use to represent this pseudosense may itself have multiple senses. There are few words in our dataset with only one sense according to WordNet; so, we restrict our choice to words for which WordNet lists no more than 10 senses. We also require that none of the real words chosen to represent different pseudosenses of a given pseudoword have any senses in common.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We apply this to unaligned embeddings; alignment with orthogonal Procrustes has no effect on this measure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by The Alan Turing Institute under the UK Engineering and Physical Sciences Research Council (EPSRC) grant EP/N510129/1. P.S. was supported in part by the EPSRC Centre for Doctoral Training in Data Science, funded by the UK EPSRC (grant EP/L016427/1) and the University of Edinburgh. D.N. was supported by Turing award TU/A/000006 and B.McG. by Turing award TU/A/000010 (RG88751). S.A.H. was supported in part by The Volkswagen Foundation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Dynamic word embeddings", "authors": [ { "first": "Robert", "middle": [], "last": "Bamler", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Mandt", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "380--389", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Bamler and Stephan Mandt. 2017. Dynamic word embeddings. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 380-389.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Discovery Science, volume 11198 of Lecture Notes in Computer Science, chapter Exploiting the Web for Semantic Change Detection", "authors": [ { "first": "Pierpaolo", "middle": [], "last": "Basile", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Mcgillivray", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pierpaolo Basile and Barbara McGillivray. 2018. Discovery Science, volume 11198 of Lecture Notes in Computer Science, chapter Exploiting the Web for Semantic Change Detection. Springer-Verlag.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Dynamic topic models", "authors": [ { "first": "D", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "J", "middle": [ "D" ], "last": "Lafferty", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 23rd international conference on Machine learning", "volume": "", "issue": "", "pages": "113--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. M. Blei and J. D. Lafferty. 2006. Dynamic topic models. 
In Proceedings of the 23rd international conference on Machine learning, pages 113-120.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Influential observations, high leverage points, and outliers in linear regression", "authors": [ { "first": "Samprit", "middle": [], "last": "Chatterjee", "suffix": "" }, { "first": "Ali", "middle": [ "S" ], "last": "Hadi", "suffix": "" } ], "year": 1986, "venue": "Statistical Science", "volume": "1", "issue": "3", "pages": "379--393", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samprit Chatterjee and Ali S. Hadi. 1986. Influential observations, high leverage points, and outliers in linear regression. Statistical Science, 1(3):379-393.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Novel word-sense identification", "authors": [ { "first": "Paul", "middle": [], "last": "Cook", "suffix": "" }, { "first": "Jey", "middle": [ "Han" ], "last": "Lau", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2014, "venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "1624--1635", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Cook, Jey Han Lau, Diana McCarthy, and Tim- othy Baldwin. 2014. Novel word-sense identifica- tion. In Proceedings of COLING 2014, the 25th In- ternational Conference on Computational Linguis- tics: Technical Papers, pages 1624-1635.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Outta control: Laws of semantic change and inherent biases in word representation models", "authors": [ { "first": "Haim", "middle": [], "last": "Dubossarsky", "suffix": "" }, { "first": "Daphna", "middle": [], "last": "Weinshall", "suffix": "" }, { "first": "Eitan", "middle": [], "last": "Grossman", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1136--1145", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haim Dubossarsky, Daphna Weinshall, and Eitan Grossman. 2017. Outta control: Laws of semantic change and inherent biases in word representation models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 1136-1145.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A bayesian model of diachronic meaning change", "authors": [ { "first": "Lea", "middle": [], "last": "Frermann", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "31--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lea Frermann and Mirella Lapata. 2016. A bayesian model of diachronic meaning change. Transactions of the Association for Computational Linguistics, 4:31-45.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Where in the world are you? 
geolocation and language identification in Twitter", "authors": [ { "first": "Mark", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Scott", "middle": [ "A" ], "last": "Hale", "suffix": "" }, { "first": "Devin", "middle": [], "last": "Gaffney", "suffix": "" } ], "year": 2014, "venue": "The Professional Geographer", "volume": "66", "issue": "4", "pages": "568--578", "other_ids": { "DOI": [ "10.1080/00330124.2014.907699" ] }, "num": null, "urls": [], "raw_text": "Mark Graham, Scott A. Hale, and Devin Gaffney. 2014. Where in the world are you? geolocation and lan- guage identification in Twitter. The Professional Geographer, 66(4):568-578.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Cultural shift or linguistic drift? comparing two computational measures of semantic change", "authors": [ { "first": "Jure", "middle": [], "last": "William L Hamilton", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Leskovec", "suffix": "" }, { "first": "", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing. Conference on Empirical Methods in Natural Language Processing", "volume": "2016", "issue": "", "pages": "2116--2121", "other_ids": {}, "num": null, "urls": [], "raw_text": "William L Hamilton, Jure Leskovec, and Dan Jurafsky. 2016a. Cultural shift or linguistic drift? comparing two computational measures of semantic change. In Proceedings of the Conference on Empirical Meth- ods in Natural Language Processing. Conference on Empirical Methods in Natural Language Process- ing, volume 2016, pages 2116-2121.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Diachronic word embeddings reveal statistical laws of semantic change", "authors": [ { "first": "William", "middle": [ "L" ], "last": "Hamilton", "suffix": "" }, { "first": "Jure", "middle": [], "last": "Leskovec", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1489--1501", "other_ids": { "DOI": [ "10.18653/v1/P16-1141" ] }, "num": null, "urls": [], "raw_text": "William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016b. Diachronic word embeddings reveal statisti- cal laws of semantic change. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1489-1501.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Rank correlation methods", "authors": [ { "first": "Maurice", "middle": [ "G" ], "last": "Kendall", "suffix": "" } ], "year": 1948, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maurice G. Kendall. 1948. Rank correlation methods. 
Griffin, London.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Ad hoc monitoring of vocabulary shifts over time", "authors": [ { "first": "Tom", "middle": [], "last": "Kenter", "suffix": "" }, { "first": "Melvin", "middle": [], "last": "Wevers", "suffix": "" }, { "first": "Pim", "middle": [], "last": "Huijnen", "suffix": "" }, { "first": "Maarten", "middle": [], "last": "De Rijke", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, CIKM '15", "volume": "", "issue": "", "pages": "1191--1200", "other_ids": { "DOI": [ "10.1145/2806416.2806474" ] }, "num": null, "urls": [], "raw_text": "Tom Kenter, Melvin Wevers, Pim Huijnen, and Maarten de Rijke. 2015. Ad hoc monitoring of vocabulary shifts over time. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, CIKM '15, pages 1191-1200, New York, NY, USA. ACM.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Temporal analysis of language through neural language models", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Yi-I", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "Kentaro", "middle": [], "last": "Hanaki", "suffix": "" }, { "first": "Darshan", "middle": [], "last": "Hegde", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science", "volume": "", "issue": "", "pages": "61--65", "other_ids": { "DOI": [ "10.3115/v1/W14-2517" ] }, "num": null, "urls": [], "raw_text": "Yoon Kim, Yi-I Chiu, Kentaro Hanaki, Darshan Hegde, and Slav Petrov. 2014. Temporal analysis of language through neural language models. In Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science, pages 61-65.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Meaning change and semantic shifts", "authors": [ { "first": "Peter", "middle": [], "last": "Koch", "suffix": "" } ], "year": 2016, "venue": "The Lexical Typology of Semantic Shifts", "volume": "", "issue": "", "pages": "21--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Koch. 2016. Meaning change and semantic shifts. In P\u00e4ivi Juvonen and Maria Koptjevskaja-Tamm, editors, The Lexical Typology of Semantic Shifts, pages 21-66. De Gruyter Mouton, Berlin/Boston.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Statistically significant detection of linguistic change", "authors": [ { "first": "Vivek", "middle": [], "last": "Kulkarni", "suffix": "" }, { "first": "Rami", "middle": [], "last": "Al-Rfou", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Perozzi", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Skiena", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 24th International Conference on World Wide Web", "volume": "", "issue": "", "pages": "625--635", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vivek Kulkarni, Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2015. Statistically significant detection of linguistic change.
In Proceedings of the 24th International Conference on World Wide Web, pages 625-635.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Diachronic word embeddings and semantic shifts: a survey", "authors": [ { "first": "Andrey", "middle": [], "last": "Kutuzov", "suffix": "" }, { "first": "Lilja", "middle": [], "last": "\u00d8vrelid", "suffix": "" }, { "first": "Terrence", "middle": [], "last": "Szymanski", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Velldal", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1384--1397", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrey Kutuzov, Lilja \u00d8vrelid, Terrence Szymanski, and Erik Velldal. 2018. Diachronic word embeddings and semantic shifts: a survey. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1384-1397.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Temporal dynamics of semantic relations in word embeddings: an application to predicting armed conflict participants", "authors": [ { "first": "Andrey", "middle": [], "last": "Kutuzov", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Velldal", "suffix": "" }, { "first": "Lilja", "middle": [], "last": "\u00d8vrelid", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1824--1829", "other_ids": { "DOI": [ "10.18653/v1/D17-1194" ] }, "num": null, "urls": [], "raw_text": "Andrey Kutuzov, Erik Velldal, and Lilja \u00d8vrelid. 2017. Temporal dynamics of semantic relations in word embeddings: an application to predicting armed conflict participants. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1824-1829.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A computational approach to lexical polysemy in Ancient Greek", "authors": [ { "first": "B", "middle": [], "last": "McGillivray", "suffix": "" }, { "first": "S", "middle": [], "last": "Hengchen", "suffix": "" }, { "first": "V", "middle": [], "last": "L\u00e4teenoja", "suffix": "" }, { "first": "M", "middle": [], "last": "Palma", "suffix": "" }, { "first": "A", "middle": [], "last": "Vatri", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. McGillivray, S. Hengchen, V. L\u00e4teenoja, M. Palma, and A. Vatri. 2019. A computational approach to lexical polysemy in Ancient Greek. Digital Scholarship in the Humanities.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Word epoch disambiguation: Finding how words change over time", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Vivi", "middle": [], "last": "Nastase", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "259--263", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea and Vivi Nastase. 2012. Word epoch disambiguation: Finding how words change over time.
In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers-Volume 2, pages 259-263.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "International Conference on Learning Representations (ICLR) Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In International Conference on Learning Representations (ICLR) Workshop.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Advances in pre-training distributed word representations", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Puhrsch", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Advances in pre-training distributed word representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A kernel independence test for geographical language variation", "authors": [ { "first": "Dong", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" } ], "year": 2017, "venue": "Computational Linguistics", "volume": "43", "issue": "3", "pages": "567--592", "other_ids": { "DOI": [ "10.1162/COLI_a_00293" ] }, "num": null, "urls": [], "raw_text": "Dong Nguyen and Jacob Eisenstein. 2017. A kernel independence test for geographical language variation. Computational Linguistics, 43(3):567-592.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Multivariate Kendall's tau for change-point detection in copulas", "authors": [ { "first": "Jean-Fran\u00e7ois", "middle": [], "last": "Quessy", "suffix": "" }, { "first": "M\u00e9riem", "middle": [], "last": "Sa\u00efd", "suffix": "" }, { "first": "Anne-Catherine", "middle": [], "last": "Favre", "suffix": "" } ], "year": 2013, "venue": "Canadian Journal of Statistics", "volume": "41", "issue": "1", "pages": "65--82", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jean-Fran\u00e7ois Quessy, M\u00e9riem Sa\u00efd, and Anne-Catherine Favre. 2013. Multivariate Kendall's tau for change-point detection in copulas.
Canadian Journal of Statistics, 41(1):65-82.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Software framework for topic modelling with large corpora", "authors": [ { "first": "Radim", "middle": [], "last": "\u0158eh\u016f\u0159ek", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Sojka", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks", "volume": "", "issue": "", "pages": "45--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Radim \u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software framework for topic modelling with large corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Valletta, Malta.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Panta Rei: Tracking Semantic Change with Distributional Semantics in Ancient Greek", "authors": [ { "first": "Martina", "middle": [ "A" ], "last": "Rodda", "suffix": "" }, { "first": "Marco", "middle": [ "S", "G" ], "last": "Senaldi", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Lenci", "suffix": "" } ], "year": 2017, "venue": "Italian Journal of Computational Linguistics", "volume": "3", "issue": "", "pages": "11--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martina A. Rodda, Marco S.G. Senaldi, and Alessandro Lenci. 2017. Panta Rei: Tracking Semantic Change with Distributional Semantics in Ancient Greek. Italian Journal of Computational Linguistics, 3:11-24.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Towards tracking semantic change by visual analytics", "authors": [ { "first": "Christian", "middle": [], "last": "Rohrdantz", "suffix": "" }, { "first": "Annette", "middle": [], "last": "Hautli", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Mayer", "suffix": "" }, { "first": "Miriam", "middle": [], "last": "Butt", "suffix": "" }, { "first": "Daniel", "middle": [ "A" ], "last": "Keim", "suffix": "" }, { "first": "Frans", "middle": [], "last": "Plank", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "305--310", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian Rohrdantz, Annette Hautli, Thomas Mayer, Miriam Butt, Daniel A. Keim, and Frans Plank. 2011. Towards tracking semantic change by visual analytics. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 305-310.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Deep neural models of semantic shift", "authors": [ { "first": "Alex", "middle": [], "last": "Rosenfeld", "suffix": "" }, { "first": "Katrin", "middle": [], "last": "Erk", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "474--484", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Rosenfeld and Katrin Erk. 2018. Deep neural models of semantic shift.
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 474-484.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Dynamic embeddings for language evolution", "authors": [ { "first": "Maja", "middle": [ "R" ], "last": "Rudolph", "suffix": "" }, { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 World Wide Web Conference on World Wide Web", "volume": "", "issue": "", "pages": "1003--1011", "other_ids": { "DOI": [ "10.1145/3178876.3185999" ] }, "num": null, "urls": [], "raw_text": "Maja R. Rudolph and David M. Blei. 2018. Dynamic embeddings for language evolution. In Proceedings of the 2018 World Wide Web Conference on World Wide Web, WWW 2018, Lyon, France, April 23-27, 2018, pages 1003-1011.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Measuring, predicting and visualizing short-term change in word representation and usage in VKontakte social network", "authors": [ { "first": "Ian", "middle": [], "last": "Stewart", "suffix": "" }, { "first": "Dustin", "middle": [], "last": "Arendt", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Bell", "suffix": "" }, { "first": "Svitlana", "middle": [], "last": "Volkova", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Eleventh International AAAI Conference on Web and Social Media (ICWSM 2017)", "volume": "", "issue": "", "pages": "672--675", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Stewart, Dustin Arendt, Eric Bell, and Svitlana Volkova. 2017. Measuring, predicting and visualizing short-term change in word representation and usage in VKontakte social network. In Proceedings of the Eleventh International AAAI Conference on Web and Social Media (ICWSM 2017), pages 672-675.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Survey of Computational Approaches to Diachronic Conceptual Change", "authors": [ { "first": "Nina", "middle": [], "last": "Tahmasebi", "suffix": "" }, { "first": "Lars", "middle": [], "last": "Borin", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Jatowt", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nina Tahmasebi, Lars Borin, and Adam Jatowt. 2018. Survey of Computational Approaches to Diachronic Conceptual Change.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "A state-of-the-art of semantic change computation", "authors": [ { "first": "Xuri", "middle": [], "last": "Tang", "suffix": "" } ], "year": 2018, "venue": "Natural Language Engineering", "volume": "24", "issue": "5", "pages": "649--676", "other_ids": { "DOI": [ "10.1017/S1351324918000220" ] }, "num": null, "urls": [], "raw_text": "Xuri Tang. 2018. A state-of-the-art of semantic change computation.
Natural Language Engineering, 24(5):649-676.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Semantic change computation: A successive approach", "authors": [ { "first": "Xuri", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Weiguang", "middle": [], "last": "Qu", "suffix": "" }, { "first": "Xiaohe", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2016, "venue": "World Wide Web - Internet & Web Information Systems", "volume": "19", "issue": "", "pages": "375--415", "other_ids": { "DOI": [ "10.1007/s11280-014-0316-y" ] }, "num": null, "urls": [], "raw_text": "Xuri Tang, Weiguang Qu, and Xiaohe Chen. 2016. Semantic change computation: A successive approach. World Wide Web - Internet & Web Information Systems, 19(3):375-415.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Short-term meaning shift: an exploratory distributional analysis", "authors": [ { "first": "Marco", "middle": [], "last": "Del Tredici", "suffix": "" }, { "first": "Raquel", "middle": [], "last": "Fern\u00e1ndez", "suffix": "" }, { "first": "Gemma", "middle": [], "last": "Boleda", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1809.03169" ] }, "num": null, "urls": [], "raw_text": "Marco Del Tredici, Raquel Fern\u00e1ndez, and Gemma Boleda. 2018. Short-term meaning shift: an exploratory distributional analysis. arXiv preprint arXiv:1809.03169.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Understanding semantic change of words over centuries", "authors": [ { "first": "Derry", "middle": [ "Tanti" ], "last": "Wijaya", "suffix": "" }, { "first": "Reyyan", "middle": [], "last": "Yeniterzi", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 international workshop on DETecting and Exploiting Cultural diversiTy on the social web", "volume": "", "issue": "", "pages": "35--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Derry Tanti Wijaya and Reyyan Yeniterzi. 2011. Understanding semantic change of words over centuries. In Proceedings of the 2011 international workshop on DETecting and Exploiting Cultural diversiTy on the social web, pages 35-40.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Dynamic word embeddings for evolving semantic discovery", "authors": [ { "first": "Zijun", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Yifan", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Weicong", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Xiong", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, WSDM '18", "volume": "", "issue": "", "pages": "673--681", "other_ids": { "DOI": [ "10.1145/3159652.3159703" ] }, "num": null, "urls": [], "raw_text": "Zijun Yao, Yifan Sun, Weicong Ding, Nikhil Rao, and Hui Xiong. 2018. Dynamic word embeddings for evolving semantic discovery. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, WSDM '18, pages 673-681.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "Precision-recall plots for time series approaches for k in range [0, 1000]. Left: Change point methods with Raw and Standardized (z) scores.
Right: global trend methods including linear regression (Beta), Pearson correlation coefficient (r), and Kendall rank correlation coefficient (tau). Dashed lines use the first timestep as the reference point for comparison while solid lines use the last timestep.", "uris": null }, "FIGREF1": { "num": null, "type_str": "figure", "text": "Neighbourhood-based distance (solid blue lines) and frequency (dotted red lines) over time, for three semantic change candidates. Vertical green lines indicate the automatically estimated change-points.", "uris": null } } } }
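The figure captions above contrast change-point scoring with global trend scoring of each word's distance-over-time series. As an illustrative sketch only (not the authors' released code; the function name global_trend_scores and the toy distance series are invented for this example), the trend statistics named in the caption can be computed from a word's per-timestep distance to its reference-time embedding roughly as follows, assuming NumPy and SciPy are available:

# Illustrative sketch: score how strongly a word's distances from its
# reference-timestep embedding trend upward over the whole time series,
# using the three "global trend" statistics named in the figure caption
# (linear-regression slope Beta, Pearson r, Kendall tau).
import numpy as np
from scipy.stats import pearsonr, kendalltau, linregress

def global_trend_scores(distances):
    """Return Beta, Pearson r, and Kendall tau for one word's distance series.

    distances[t] is assumed to be the word's cosine distance between its
    embedding at timestep t and its embedding at the chosen reference
    timestep (first or last); higher scores suggest steadier semantic drift.
    """
    t = np.arange(len(distances), dtype=float)
    d = np.asarray(distances, dtype=float)
    beta = linregress(t, d).slope   # slope of distance regressed on time
    r, _ = pearsonr(t, d)           # Pearson correlation with time
    tau, _ = kendalltau(t, d)       # Kendall rank correlation with time
    return {"beta": beta, "r": r, "tau": tau}

if __name__ == "__main__":
    # Hypothetical toy series: a word drifting steadily away from its
    # first-timestep embedding versus one that merely fluctuates.
    drifting = [0.05, 0.12, 0.20, 0.26, 0.35, 0.41]
    stable = [0.10, 0.08, 0.12, 0.09, 0.11, 0.10]
    print("drifting:", global_trend_scores(drifting))
    print("stable:  ", global_trend_scores(stable))

Ranking all candidate words by one of these scores and sweeping the cut-off k from 0 to 1000 would yield precision-recall curves of the kind plotted in the figure; the change-point panel would instead rank words by the size of the shift detected at their estimated change-point.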