{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:47:14.093318Z" }, "title": "Examination and Extension of Strategies for Improving Personalized Language Modeling via Interpolation", "authors": [ { "first": "Liqun", "middle": [], "last": "Shao", "suffix": "", "affiliation": {}, "email": "lishao@microsoft.com" }, { "first": "Sahitya", "middle": [], "last": "Mantravadi", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Tom", "middle": [], "last": "Manzini", "suffix": "", "affiliation": {}, "email": "thmanzin@microsoft.com" }, { "first": "Alejandro", "middle": [], "last": "Buendia", "suffix": "", "affiliation": {}, "email": "albuendi@microsoft.com" }, { "first": "Manon", "middle": [], "last": "Knoertzer", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Soundar", "middle": [], "last": "Srinivasan", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Chris", "middle": [], "last": "Quirk", "suffix": "", "affiliation": {}, "email": "chrisq@microsoft.com" }, { "first": "Microsoft", "middle": [], "last": "Corp", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we detail novel strategies for interpolating personalized language models and methods to handle out-of-vocabulary (OOV) tokens to improve personalized language models. Using publicly available data from Reddit, we demonstrate improvements in offline metrics at the user level by interpolating a global LSTM-based authoring model with a user-personalized n-gram model. By optimizing this approach with a back-off to uniform OOV penalty and the interpolation coefficient, we observe that over 80% of users receive a lift in perplexity, with an average of 5.2% in perplexity lift per user. In doing this research we extend previous work in building NLIs and improve the robustness of metrics for downstream tasks.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we detail novel strategies for interpolating personalized language models and methods to handle out-of-vocabulary (OOV) tokens to improve personalized language models. Using publicly available data from Reddit, we demonstrate improvements in offline metrics at the user level by interpolating a global LSTM-based authoring model with a user-personalized n-gram model. By optimizing this approach with a back-off to uniform OOV penalty and the interpolation coefficient, we observe that over 80% of users receive a lift in perplexity, with an average of 5.2% in perplexity lift per user. In doing this research we extend previous work in building NLIs and improve the robustness of metrics for downstream tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Natural language interfaces (NLIs) have become a ubiquitous part of modern life. Such interfaces are used to converse with personal assistants (e.g., Apple Siri, Amazon Alexa, Google Assistant, Microsoft Cortana), to search for and gather information (Google, Bing), and to interact with others on social media. 
One developing use case is to aid the user during composition by suggesting words, phrases, sentences, and even paragraphs that complete the user's thoughts (Radford et al., 2019) .", "cite_spans": [ { "start": 469, "end": 491, "text": "(Radford et al., 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Personalization of these interfaces is a natural step forward in a world where the vocabulary, grammar, and language can differ hugely from user to user (Ishikawa, 2015; Rabinovich et al., 2018) . Numerous works have described personalization in NLIs in audio rendering devices (Morse, 2008) , digital assistants (Chen et al., 2014) , telephone interfaces (Partovi et al., 2005) , etc. We explore an approach for personalization of language models (LMs) for use in downstream NLIs for composition assistance, and replicate previous work to show that interpolating a global long short-term memory network (LSTM) model with user-personalized n-gram models provides per-user performance improvements when compared with only a global LSTM model (Chen et al., 2015, 2019) . We extend that work by providing new strategies to interpolate the predictions of these two models. We evaluate these strategies on a publicly available set of Reddit user comments and show that our interpolation strategies deliver a 5.2% perplexity lift. Finally, we describe methods for handling the crucial edge case of out-of-vocabulary (OOV) tokens 1 .", "cite_spans": [ { "start": 148, "end": 164, "text": "(Ishikawa, 2015;", "ref_id": "BIBREF4" }, { "start": 165, "end": 189, "text": "Rabinovich et al., 2018)", "ref_id": "BIBREF13" }, { "start": 273, "end": 286, "text": "(Morse, 2008)", "ref_id": "BIBREF11" }, { "start": 308, "end": 327, "text": "(Chen et al., 2014)", "ref_id": "BIBREF0" }, { "start": 351, "end": 373, "text": "(Partovi et al., 2005)", "ref_id": "BIBREF12" }, { "start": 767, "end": 785, "text": "(Chen et al., 2015", "ref_id": "BIBREF2" }, { "start": 786, "end": 806, "text": "(Chen et al., , 2019", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Specifically, the contributions of this work are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. We evaluate several approaches to handle OOV tokens, covering edge cases not discussed in the LM personalization literature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. We provide novel analysis and selection of interpolation coefficients for combining global models with user-personalized models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. We experimentally analyze trade-offs and evaluate our personalization mechanisms on public data, enabling replication by the research community.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Language modeling is a critical component for many NLIs, and personalization is a natural direction to improve these interfaces. Several published works have explored personalization of language models using historical search queries (Jaech and Ostendorf, 2018) , features garnered from social graphs (Wen et al., 2012; Tseng et al., 2015; Lee et al., 2016) , and transfer learning techniques (Yoon et al., 2017) . 
Other work has explored using profile information (location, name, etc.) as additional features to condition trained models (Shokouhi, 2013; Jaech and Ostendorf, 2018) . Specifically, in the NLI domain, Google Smart Compose (Chen et al., 2019) productized the approach described in (Chen et al., 2015) by using a linear interpolation of a general background model and a personalized n-gram model to personalize LM predictions in the email authoring setting. We view our work as a natural extension to this line of research because strategies that improve personalization at the language modeling level drive results at the user interface level.", "cite_spans": [ { "start": 234, "end": 261, "text": "(Jaech and Ostendorf, 2018)", "ref_id": "BIBREF5" }, { "start": 301, "end": 319, "text": "(Wen et al., 2012;", "ref_id": "BIBREF17" }, { "start": 320, "end": 339, "text": "Tseng et al., 2015;", "ref_id": "BIBREF16" }, { "start": 340, "end": 357, "text": "Lee et al., 2016)", "ref_id": "BIBREF9" }, { "start": 393, "end": 412, "text": "(Yoon et al., 2017)", "ref_id": "BIBREF18" }, { "start": 539, "end": 555, "text": "(Shokouhi, 2013;", "ref_id": "BIBREF15" }, { "start": 556, "end": 582, "text": "Jaech and Ostendorf, 2018)", "ref_id": "BIBREF5" }, { "start": 639, "end": 658, "text": "(Chen et al., 2019)", "ref_id": "BIBREF1" }, { "start": 697, "end": 716, "text": "(Chen et al., 2015)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The goal of text prediction is strongly aligned with language modeling. The task of language modeling is to predict which words come next, given a set of context words. In this paper, we explore using a combination of both large scale neural LMs and small scale personalized n-gram LMs. This combination has been studied in the literature (Chen et al., 2015) and has been found to be performant. We describe mechanisms for extending this previous work in this section. Once trained, we compute the perplexity of these models not by exponentiation of the cross entropy, but rather by explicitly predicting the probability of test sequences. In practice this model is to be used to rerank sentence completion sequences. As a result, it is impossible to ignore the observation of OOV tokens.", "cite_spans": [ { "start": 339, "end": 358, "text": "(Chen et al., 2015)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Personalized Interpolation Model", "sec_num": "3" }, { "text": "Back-off n-gram LMs (Kneser and Ney, 1995) have been widely adopted given their simplicity, and efficient parameter estimation and discounting algorithms further improve robustness (Chen et al., 2015) . Compared with DNN-based models, n-gram LMs are computationally cheap to train, lightweight to store and query, and fit well even on small data-crucial benefits for personalization. Addressing the sharp distributions and sparse data issues in n-gram counts is critical. 
We rely on Modified Kneser-Ney smoothing (James, 2000) , which is generally accepted as one of the most effective smoothing techniques.", "cite_spans": [ { "start": 20, "end": 42, "text": "(Kneser and Ney, 1995)", "ref_id": "BIBREF8" }, { "start": 181, "end": 200, "text": "(Chen et al., 2015)", "ref_id": "BIBREF2" }, { "start": 492, "end": 526, "text": "Kneser-Ney smoothing (James, 2000)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Personalized n-gram LMs", "sec_num": "3.1" }, { "text": "For large scale language modeling, neural network methods can produce dramatic improvements in predictive performance (Jozefowicz et al., 2016) . Specifically, we use LSTM cells (Hochreiter and Schmidhuber, 1997) , known for their ability to capture long distance context without vanishing gradients. By computing the softmax function on the output scores of the LSTM we can extract the LSTM's per-token approximation as language model probabilities.", "cite_spans": [ { "start": 118, "end": 143, "text": "(Jozefowicz et al., 2016)", "ref_id": "BIBREF7" }, { "start": 178, "end": 212, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Global LSTM", "sec_num": "3.2" }, { "text": "We use perplexity (PP) to evaluate the performance of our LMs. PP is a measure of how well a probability model predicts a sample, i.e., how well an LM predicts the next word. This can be treated as a branching factor. Mathematically, PP is the exponentiation of the entropy of a probability distribution. Lower PP is indicative of a better LM. We define lift in perplexity (PP lift) as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "PP lift = P P global \u2212 P P interpolated P P global ,", "eq_num": "(1)" } ], "section": "Evaluation", "sec_num": "3.3" }, { "text": "where P P interpolated is the perplexity of the interpolated model and P P global is the perplexity of the global LSTM model, which serves as the baseline. Higher PP lift is indicative of a better LM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.3" }, { "text": "Past work (Chen et al., 2015) has described mechanisms for interpolating global models with personalized models for each user. Our experimentation mixes a global LSTM model with the personalized n-gram models detailed above 2 . The interpolation is a linear combination of the predicted token probabilities:", "cite_spans": [ { "start": 10, "end": 29, "text": "(Chen et al., 2015)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Interpolation Strategies", "sec_num": "3.4" }, { "text": "P = \u03b1P personal + (1 \u2212 \u03b1)P global (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpolation Strategies", "sec_num": "3.4" }, { "text": "\u03b1 indicates how much personalization is added to the global model. We explore constant values of \u03b1, either globally or for each user. We compute a set of oracle \u03b1 values, the values of \u03b1 per user that empirically minimize interpolated perplexity. We compare our strategies for tuning \u03b1 to these oracle \u03b1 values, which present the best possible performance on the given user data in Section 5.3. 
Intuitively, users whose comments have a high proportion of tokens outside the global vocabulary will need more input from the global model than from their own personalized model to accurately model their language habits. Thus, we also explore an inverse relationship between \u03b1 and each user's OOV rate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpolation Strategies", "sec_num": "3.4" }, { "text": "When training on datasets with a large proportion of OOV tokens, low PP may not indicate a good model. Specifically, if the proportion of OOV tokens in the data is high, the model may assign too much probability mass to OOV tokens, resulting in a model with a propensity to predict the OOV token. Such a model may have low PP, but only because it frequently predicts the commonly occurring OOV token. While this may be an effective model of the pure sequence of tokens, it does not align with the downstream objectives present at the interface level, which rely on robust prediction of non-OOV tokens. Because of this disconnect between the model and the overall task objective, mitigation strategies must be implemented in order to adequately evaluate the performance of LMs in high OOV settings. We evaluate the following strategies to mitigate this behavior:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OOV Mitigation Strategies", "sec_num": "3.5" }, { "text": "1. Do nothing, assigning OOV tokens their estimated probabilities;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OOV Mitigation Strategies", "sec_num": "3.5" }, { "text": "2. Skip the OOV tokens, scoring only those items known in the training vocabulary; and 3. Back-off to a uniform OOV penalty, assigning a fixed probability \u03c6 to model the likelihood of selecting the OOV token 3 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OOV Mitigation Strategies", "sec_num": "3.5" }, { "text": "When reporting our results, we denote PP base as the PP observed when using strategy 1, PP skip as the PP observed when using strategy 2, and PP backoff as the PP observed when using strategy 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OOV Mitigation Strategies", "sec_num": "3.5" }, { "text": "The data for our model comes from comments made by users on the social media website Reddit 4 . Reddit is a rich source of natural language data with high linguistic diversity due to posts about a variety of topics, informality of language, and sheer volume of data. As a linguistic resource, Reddit comments are written in a heavily conversational and colloquial tone, and users frequently use slang and misspell words. Because of this, there is a high number of unique tokens. As developers of a machine learning system, we seek to balance having a large vocabulary in order to capture the most data with having a small vocabulary in order to keep the model from overfitting. We construct our vocabulary by empirically selecting the n most common tokens observed in randomly selected Reddit user comments. We then share this vocabulary, created from the global training set, in both the personalized and global models. This value of n must be tuned based on the data. When choosing a vocabulary size, there is a trade-off between performance and capturing varied language. Larger vocabularies adversely impact performance but may encapsulate more variability of language. For a given vocabulary size chosen from training data for the global LSTM, we plot the resulting OOV rates for users. 
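As a rough illustration of how the shared vocabulary and the per-user OOV rates could be computed, consider the following sketch. It is not the authors' code; `sampled_comments` and `user_comments` are hypothetical inputs holding tokenized Reddit comments.

```python
# Minimal sketch (not the authors' code): build the shared vocabulary from a
# random sample of global training comments and compute per-user OOV rates.
# `sampled_comments` is a list of tokenized comments; `user_comments` maps a
# user id to that user's tokenized comments. Both names are hypothetical.
from collections import Counter

def build_vocabulary(sampled_comments, size=50_000):
    # Keep the `size` most frequent tokens observed in the sampled comments.
    counts = Counter(tok for comment in sampled_comments for tok in comment)
    return {tok for tok, _ in counts.most_common(size)}

def oov_rate(comments, vocabulary):
    # Fraction of a user's tokens that fall outside the shared vocabulary.
    tokens = [tok for comment in comments for tok in comment]
    if not tokens:
        return 0.0
    return sum(tok not in vocabulary for tok in tokens) / len(tokens)

def per_user_oov_rates(user_comments, vocabulary):
    return {user: oov_rate(comments, vocabulary) for user, comments in user_comments.items()}
```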
As can be seen when comparing Figure 1 and Figure 2 , expanding the vocabulary size twenty-fold yields very little improvement in user-level OOV rates. Thus, we choose a vocabulary size of 50,000. For the global LSTM, we split the global distribution of Reddit data into training sourced from 2016, validation sourced from 2017, and test sourced from 2018. We sampled such that 70% of users were reserved for training, 20% of users for validation, and 10% of users for test. We allot 100,000 users for the test set and scale the number of users in the other sets accordingly. There are 10 billion total tokens in the training data, with 29 million unique tokens. 90% of unique tokens occur 6 or fewer times, and half of users have 20 or fewer comments per year with an average comment length of 13 tokens. (Figure 2: Histogram of OOV rates for 3265 users' training data with a vocabulary size of 1,000,000.)", "cite_spans": [], "ref_spans": [ { "start": 1330, "end": 1338, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1343, "end": 1351, "text": "Figure 2", "ref_id": null }, { "start": 2094, "end": 2102, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Data", "sec_num": "4" }, { "text": "For the personalized n-grams, we selected all comment data from 3265 random Reddit users 6 who made at least one comment in each of 2016, 2017, and 2018. Then, for each user, we selected the data from 2016 as training data, the data from 2017 as validation data, and the data from 2018 as testing data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4" }, { "text": "Here we discuss the results observed when evaluating the interpolated global LSTM and user-personalized n-gram model on users' comments using various OOV mitigation and \u03b1 interpolation strategies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "In our data used for personalization, 68% of users have an OOV rate above 25% on validation data, and 65% of users have an OOV rate above 25% on training data. This empirically causes large deviations among PP backoff , PP skip , and PP base . We find that a personalized n-gram model cannot handle OOV tokens well in high OOV settings, because it assigns higher probabilities to OOV tokens than to some in-vocabulary tokens. As discussed in Section 3.5, when per-user OOV rates are high, PP base presents a view of the results that is disconnected from downstream use in an NLI. At the same time, PP skip presents the view most aligned with the downstream task, because in an NLI the OOV token should never be shown. However, PP skip comes with some mathematical baggage. Specifically, when all tokens are OOV, PP skip will be infinite. These two approaches represent the extremes of the strategies which could be used. We argue that PP backoff represents the best of both worlds. Figure 3 shows that PP backoff provides measurements near the minima that are closely aligned with PP skip while also being free of the mathematical and procedural issues associated with PP skip and PP base . We provide an example to further illustrate the above statement. Consider a high OOV rate comment such as \"re-titled jaff ransomware only fivnin.\" with OOV tokens re-titled, jaff, ransomware, fivnin. Following encoding, the model would see this sequence as \"OOV OOV OOV only OOV\". 
When measuring the probability of this sequence, a model evaluated using PP base would have lower perplexity, because it has been trained to overweight the probability of OOV tokens, as they occur more frequently than the tokens they represent. However, this sequence should have far lower probability, and thus higher perplexity, because the model is in fact failing to adequately model the true sequence. We argue that assigning a uniform value \u03c6 to OOV tokens more accurately represents the performance of the model when presented with data containing a high proportion of OOV tokens.", "cite_spans": [], "ref_spans": [ { "start": 1012, "end": 1020, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "OOV Mitigation Strategies", "sec_num": "5.1" }, { "text": "Because we believe that PP backoff presents the most accurate picture of model performance, we have chosen to present our results in Sections 5.2 and 5.3 using PP backoff . ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OOV Mitigation Strategies", "sec_num": "5.1" }, { "text": "We next present an interesting dichotomy in Figure 4 that has not previously been discussed in the personalization literature. In the setting with a constant \u03b1 for all users, we can optimize either to minimize the overall PP backoff for all users or to maximize the average PP backoff lift across users. These two objectives result in different constant values of \u03b1 5 . Specifically, minimizing PP backoff over users yields \u03b1 = 0.105, providing an improvement for 67.3% of users and an average PP backoff lift of 2.5%. Maximizing the average PP backoff lift per user yields \u03b1 = 0.041, providing an improvement for 74.2% of users and an average PP backoff lift of 2.7% 6 . ", "cite_spans": [], "ref_spans": [ { "start": 44, "end": 53, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Analysis of Personalization", "sec_num": "5.2" }, { "text": "Coefficient \u03b1 Optimization", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constant and Personalized Interpolation", "sec_num": "5.3" }, { "text": "When searching for a constant value of \u03b1 for all users, \u03b1 = 0.105 achieves the minimum mean interpolated PP backoff , with an average PP backoff lift of 2.5%. Next, we personalize the value of \u03b1 for each user. We first produce a set of oracle \u03b1 values 6 as described in Section 3.4. With this set of oracle values of \u03b1, the average PP backoff lift is 6.1%, the best achievable in this context. While it is possible to compute the oracle values for each user in a production setting, this may not be tractable when user counts are high and latency constraints exist.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constant and Personalized Interpolation", "sec_num": "5.3" }, { "text": "Thus, we try an inverse linear relationship: \u03b1 = k \u2022 (1 \u2212 OOV rate). To illustrate the effect of this relationship, we perform this optimization 10 times, using a different random subset of users each time to optimize k, and then evaluate on the rest of the users. On average, we observe a PP backoff lift of 5.2%, and 80.1% of users achieve an improvement in PP backoff . In Figure 5 we see that this lower-complexity heuristic approach achieves near-oracle performance, with the distribution of PP backoff for this method closely matching the oracle distribution of PP backoff . 
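A minimal sketch of this heuristic is given below, assuming a simple grid search for k on a random subset of users; the paper does not specify how k is optimized, so both the search and the split are assumptions. `interpolated_pp_backoff(user, alpha)` is a hypothetical function returning a user's interpolated PP backoff for a given alpha, and `oov_rates` maps each user to the OOV rate described in Section 4.

```python
# Minimal sketch (not the authors' code) of the alpha = k * (1 - OOV rate)
# heuristic: fit k on a random subset of users, then apply it to the rest.
# `interpolated_pp_backoff(user, alpha)` and `oov_rates` are hypothetical.
import random

def fit_k(users, oov_rates, interpolated_pp_backoff, candidates=None):
    # Grid-search k to minimize mean interpolated PP_backoff on the tuning users.
    candidates = candidates or [i / 100 for i in range(1, 51)]
    def mean_pp(k):
        return sum(interpolated_pp_backoff(u, k * (1.0 - oov_rates[u])) for u in users) / len(users)
    return min(candidates, key=mean_pp)

def heuristic_alphas(all_users, oov_rates, interpolated_pp_backoff, tune_fraction=0.5, seed=0):
    users = list(all_users)
    random.Random(seed).shuffle(users)
    cut = int(len(users) * tune_fraction)
    tune_users, eval_users = users[:cut], users[cut:]
    k = fit_k(tune_users, oov_rates, interpolated_pp_backoff)
    # Per-user alpha for the held-out users under the fitted slope k.
    return k, {u: k * (1.0 - oov_rates[u]) for u in eval_users}
```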
We also find that this method of \u03b1 personalization yields lower PP backoff for more users than using a constant value for \u03b1. ", "cite_spans": [], "ref_spans": [ { "start": 375, "end": 383, "text": "Figure 5", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Constant and Personalized Interpolation", "sec_num": "5.3" }, { "text": "In this paper, we presented new strategies for interpolating personalized LMs, discussed strategies for handling OOV tokens that give better visibility into model performance, and evaluated these strategies on public data, allowing the research community to build upon these results. Furthermore, two directions could be worth exploring: investigating when personalization is useful at the user level, to better interpret the results; and researching user-specific vocabularies for personalized models, instead of using a shared vocabulary for both the personalized and global background models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion & Future Work", "sec_num": "6" }, { "text": "As NLIs move closer to the user, personalization mechanisms will need to become more robust. We believe the results we have presented form a natural step in building that robustness. By analyzing the results with the lowest interpolated PP backoff (\u03b1 = 0.105 for all users), we make two observations: users with an average comment length of less than around 30 tokens do not get much benefit from personalization, and neither do users with fewer than around 100 comments. Figure 8 shows the distribution of the empirically computed \"oracle\" values for \u03b1. ", "cite_spans": [], "ref_spans": [ { "start": 497, "end": 505, "text": "Figure 8", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Conclusion & Future Work", "sec_num": "6" }, { "text": "To the best of the authors' knowledge, these edge cases are not clearly defined in the literature on combining two LMs trained on two different datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We further detail the hyperparameters and training scheme of our LSTM and n-gram models in the appendices.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We consider \u03c6 to be a hyperparameter which must be tuned for each use case. In our experiments we assign \u03c6 to be 1/V, where V is the vocabulary size. 4 We retrieved copies of www.reddit.com user comments from https://pushshift.io/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "There may be other trade-offs to examine. 6 Further details are included in the appendices.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "www.azure.com", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank the Microsoft Search, Assistant and Intelligence team, and in particular Geisler Antony, Kalyan Ayloo, Mikhail Kulikov, Vipul Agarwal, Anton Amirov, Nick Farn and Kunho Kim, for their invaluable help and support in this research. We also thank T. J. 
Hazen, Vijay Ramani, and the anonymous reviewers for their insightful feedback and suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "The global LSTM model trained token embeddings of size 300, and had hidden unit layers of size 256 and 128, an output projection of dimension 100, and a vocabulary of 50,000 tokens. It was trained with dropout using the Adam optimizer, and we parallel-trained our global LSTM on an Azure 7 Standard NC24s v2 machine which includes 24 vCPUs and 4 NVIDIA Tesla P100 GPUs.The personalized n-gram models were 3-gram modified Kneser-Ney smoothed models with discounting values of 0.5 (1-grams), 1 (2-grams), and 1.5 (3-grams).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Hyperparameters and Model Training", "sec_num": null }, { "text": "The average size of the user-personalized corpus is around 140 comments, while the median size is 23 comments. The average comment length for each user is around 14 tokens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.2 User Analysis Plots", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Personalized vocabulary for digital assistant", "authors": [ { "first": "Lik", "middle": [], "last": "Harry Chen", "suffix": "" }, { "first": "Adam", "middle": [ "John" ], "last": "Cheyer", "suffix": "" }, { "first": "Didier", "middle": [ "Rene" ], "last": "Guzzoni", "suffix": "" }, { "first": "Thomas", "middle": [ "Robert" ], "last": "Gruber", "suffix": "" } ], "year": 2014, "venue": "US Patent", "volume": "8", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lik Harry Chen, Adam John Cheyer, Didier Rene Guz- zoni, and Thomas Robert Gruber. 2014. Person- alized vocabulary for digital assistant. US Patent 8,903,716.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Gmail smart compose: Real-time assisted writing", "authors": [ { "first": "Mia", "middle": [], "last": "Xu Chen", "suffix": "" }, { "first": "N", "middle": [], "last": "Benjamin", "suffix": "" }, { "first": "Gagan", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Shuyuan", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jackie", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Yinan", "middle": [], "last": "Tsay", "suffix": "" }, { "first": "", "middle": [], "last": "Wang", "suffix": "" }, { "first": "M", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Dai", "suffix": "" }, { "first": "", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining", "volume": "", "issue": "", "pages": "2287--2295", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mia Xu Chen, Benjamin N Lee, Gagan Bansal, Yuan Cao, Shuyuan Zhang, Justin Lu, Jackie Tsay, Yinan Wang, Andrew M Dai, Zhifeng Chen, et al. 2019. Gmail smart compose: Real-time assisted writing. 
In Proceedings of the 25th ACM SIGKDD Interna- tional Conference on Knowledge Discovery & Data Mining, pages 2287-2295.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Investigation of back-off based interpolation between recurrent neural network and ngram language models", "authors": [ { "first": "Xie", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xunying", "middle": [], "last": "Liu", "suffix": "" }, { "first": "J", "middle": [ "F" ], "last": "Mark", "suffix": "" }, { "first": "Philip C", "middle": [], "last": "Gales", "suffix": "" }, { "first": "", "middle": [], "last": "Woodland", "suffix": "" } ], "year": 2015, "venue": "2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)", "volume": "", "issue": "", "pages": "181--186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xie Chen, Xunying Liu, Mark JF Gales, and Philip C Woodland. 2015. Investigation of back-off based in- terpolation between recurrent neural network and n- gram language models. In 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pages 181-186. IEEE.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Gender differences in vocabulary use in essay writing by university students", "authors": [ { "first": "Yuka", "middle": [], "last": "Ishikawa", "suffix": "" } ], "year": 2015, "venue": "Procedia-Social and Behavioral Sciences", "volume": "192", "issue": "", "pages": "593--600", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuka Ishikawa. 2015. Gender differences in vocab- ulary use in essay writing by university students. Procedia-Social and Behavioral Sciences, 192:593- 600.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Personalized language model for query auto-completion", "authors": [ { "first": "Aaron", "middle": [], "last": "Jaech", "suffix": "" }, { "first": "Mari", "middle": [], "last": "Ostendorf", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1804.09661" ] }, "num": null, "urls": [], "raw_text": "Aaron Jaech and Mari Ostendorf. 2018. Personalized language model for query auto-completion. arXiv preprint arXiv:1804.09661.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Modified kneser-ney smoothing of n-gram models. Research Institute for Advanced Computer Science", "authors": [ { "first": "Frankie", "middle": [ "James" ], "last": "", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frankie James. 2000. Modified kneser-ney smoothing of n-gram models. Research Institute for Advanced Computer Science, Tech. Rep. 
00.07.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Exploring the limits of language modeling", "authors": [ { "first": "Rafal", "middle": [], "last": "Jozefowicz", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the lim- its of language modeling.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Improved backing-off for m-gram language modeling", "authors": [ { "first": "Reinhard", "middle": [], "last": "Kneser", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 1995, "venue": "1995 International Conference on Acoustics, Speech, and Signal Processing", "volume": "1", "issue": "", "pages": "181--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In 1995 International Conference on Acoustics, Speech, and Signal Processing, volume 1, pages 181-184. IEEE.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Personalizing recurrent-neuralnetwork-based language model by social network", "authors": [ { "first": "Hung-Yi", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Bo-Hsiang", "middle": [], "last": "Tseng", "suffix": "" }, { "first": "Tsung-Hsien", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Tsao", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hung-Yi Lee, Bo-Hsiang Tseng, Tsung-Hsien Wen, and Yu Tsao. 2016. Personalizing recurrent-neural- network-based language model by social network.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Speech, and Language Processing", "authors": [], "year": null, "venue": "", "volume": "25", "issue": "", "pages": "519--530", "other_ids": {}, "num": null, "urls": [], "raw_text": "IEEE/ACM Transactions on Audio, Speech, and Lan- guage Processing, 25(3):519-530.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "System and method for personalizing the user interface of audio rendering devices", "authors": [ { "first": "Lee", "middle": [], "last": "Morse", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee Morse. 2008. System and method for personaliz- ing the user interface of audio rendering devices. US Patent App. 
11/779,256.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Method and apparatus for content personalization over a telephone interface with adaptive personalization", "authors": [ { "first": "Hadi", "middle": [], "last": "Partovi", "suffix": "" }, { "first": "Roderick", "middle": [ "Steven" ], "last": "Brathwaite", "suffix": "" }, { "first": "Angus", "middle": [ "Macdonald" ], "last": "Davis", "suffix": "" }, { "first": "S", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Brandon", "middle": [ "William" ], "last": "Mccue", "suffix": "" }, { "first": "John", "middle": [], "last": "Porter", "suffix": "" }, { "first": "Eckart", "middle": [], "last": "Giannandrea", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Walther", "suffix": "" }, { "first": "Zhe", "middle": [], "last": "Accardi", "suffix": "" }, { "first": "", "middle": [], "last": "Li", "suffix": "" } ], "year": 2005, "venue": "US Patent", "volume": "6", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hadi Partovi, Roderick Steven Brathwaite, Angus Mac- donald Davis, Michael S McCue, Brandon William Porter, John Giannandrea, Eckart Walther, Anthony Accardi, and Zhe Li. 2005. Method and appara- tus for content personalization over a telephone in- terface with adaptive personalization. US Patent 6,842,767.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Native language cognate effects on second language lexical choice", "authors": [ { "first": "Ella", "middle": [], "last": "Rabinovich", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" }, { "first": "Shuly", "middle": [], "last": "Wintner", "suffix": "" } ], "year": 2018, "venue": "Transactions of the Association for Computational Linguistics", "volume": "6", "issue": "", "pages": "329--342", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ella Rabinovich, Yulia Tsvetkov, and Shuly Wintner. 2018. Native language cognate effects on second language lexical choice. Transactions of the Associ- ation for Computational Linguistics, 6:329-342.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "OpenAI Blog", "volume": "1", "issue": "8", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Learning to personalize query auto-completion", "authors": [ { "first": "Milad", "middle": [], "last": "Shokouhi", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval", "volume": "", "issue": "", "pages": "103--112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Milad Shokouhi. 2013. Learning to personalize query auto-completion. 
In Proceedings of the 36th interna- tional ACM SIGIR conference on Research and de- velopment in information retrieval, pages 103-112.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Personalizing universal recurrent neural network language model with user characteristic features by social network crowdsourcing", "authors": [ { "first": "Bo-Hsiang", "middle": [], "last": "Tseng", "suffix": "" }, { "first": "Hung-Yi", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Lin-Shan", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2015, "venue": "2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)", "volume": "", "issue": "", "pages": "84--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo-Hsiang Tseng, Hung-yi Lee, and Lin-Shan Lee. 2015. Personalizing universal recurrent neural net- work language model with user characteristic fea- tures by social network crowdsourcing. In 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pages 84-91. IEEE.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Personalized language modeling by crowd sourcing with social network data for voice access of cloud applications", "authors": [ { "first": "Hung-Yi", "middle": [], "last": "Tsung-Hsien Wen", "suffix": "" }, { "first": "Tai-Yuan", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Lin-Shan", "middle": [], "last": "Chen", "suffix": "" }, { "first": "", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2012, "venue": "2012 IEEE Spoken Language Technology Workshop (SLT)", "volume": "", "issue": "", "pages": "188--193", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsung-Hsien Wen, Hung-Yi Lee, Tai-Yuan Chen, and Lin-Shan Lee. 2012. Personalized language modeling by crowd sourcing with social network data for voice access of cloud applications. In 2012 IEEE Spoken Language Technology Workshop (SLT), pages 188-193. IEEE.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Efficient transfer learning schemes for personalized language modeling using recurrent neural network", "authors": [ { "first": "Seunghyun", "middle": [], "last": "Yoon", "suffix": "" }, { "first": "Hyeongu", "middle": [], "last": "Yun", "suffix": "" }, { "first": "Yuna", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Gyu-Tae", "middle": [], "last": "Park", "suffix": "" }, { "first": "Kyomin", "middle": [], "last": "Jung", "suffix": "" } ], "year": 2017, "venue": "Workshops at the Thirty-First AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seunghyun Yoon, Hyeongu Yun, Yuna Kim, Gyu-tae Park, and Kyomin Jung. 2017. Efficient transfer learning schemes for personalized language model- ing using recurrent neural network. In Workshops at the Thirty-First AAAI Conference on Artificial Intel- ligence.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "Histogram of OOV rates for 3265 users' training data with a vocabulary size of 50,000." }, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "Average of interpolated PP for all users for varied values of \u03b1 \u2264 0.7 for each method of approaching OOV tokens." }, "FIGREF2": { "uris": null, "num": null, "type_str": "figure", "text": "PP backoff and average PP backoff lift over baseline for various values of \u03b1 < 0.22." 
}, "FIGREF3": { "uris": null, "num": null, "type_str": "figure", "text": "Distribution of interpolated PP backoff for users using each method of \u03b1 optimization. The values for \u03b1 = k \u2022 (1 \u2212 OOV rate) are averaged over 10 random selections." }, "FIGREF4": { "uris": null, "num": null, "type_str": "figure", "text": "Histogram of PP lift over global model vs. average comment length (\u03b1 = 0.105)." }, "FIGREF5": { "uris": null, "num": null, "type_str": "figure", "text": "Histogram of PP lift over global model vs. number of comments (\u03b1 = 0.105)." }, "FIGREF6": { "uris": null, "num": null, "type_str": "figure", "text": "Distribution of oracle values of \u03b1 per user." } } } }