{ "paper_id": "I13-1010", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:15:50.711391Z" }, "title": "Hybrid Models for Lexical Acquisition of Correlated Styles", "authors": [ { "first": "Julian", "middle": [], "last": "Brooke", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Toronto", "location": {} }, "email": "jbrooke@cs.toronto.edu" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Toronto", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Automated lexicon acquisition from corpora represents one way that large datasets can be leveraged to provide resources for a variety of NLP tasks. Our work applies techniques popularized in sentiment lexicon acquisition and topic modeling to the broader task of creating a stylistic lexicon. A novel aspect of our approach is a focus on multiple related styles, first extracting initial independent estimates of style based on co-occurrence with seeds in a large corpus, and then refining those estimates based on the relationship between styles. We compare various promising implementation options, including vector space, Bayesian, and graph-based representations, and conclude that a hybrid approach is indeed warranted.", "pdf_parse": { "paper_id": "I13-1010", "_pdf_hash": "", "abstract": [ { "text": "Automated lexicon acquisition from corpora represents one way that large datasets can be leveraged to provide resources for a variety of NLP tasks. Our work applies techniques popularized in sentiment lexicon acquisition and topic modeling to the broader task of creating a stylistic lexicon. A novel aspect of our approach is a focus on multiple related styles, first extracting initial independent estimates of style based on co-occurrence with seeds in a large corpus, and then refining those estimates based on the relationship between styles. 
We compare various promising implementation options, including vector space, Bayesian, and graph-based representations, and conclude that a hybrid approach is indeed warranted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Though lexical resources are useful for many NLP tasks, manual lexicon creation is often onerous, particularly for aspects of language where full coverage requires hundreds of thousands of annotations. This work deals with one such aspect, which we refer to as stylistic variation. This should not be understood in a purely aesthetic sense, but as reflecting various high-level aspects of the text, including genre and social identity. Some tasks relevant to style so defined include genre classification (Kessler et al., 1997) , author profiling (Rosenthal and McKeown, 2011) , social relationship classification (Peterson et al., 2011) , sentiment analysis (Wilson et al., 2005) , readability classification (Collins-Thompson and Callan, 2005) , and text generation (Hovy, 1990) . Following the classic work of Biber (1988) , computational modeling of style has often focused on textual statistics and the frequency of function words and syntactic categories. There are, of course, manually-constructed lists which capture some aspects of style, for instance resources related to psycholinguistics (Coltheart, 1980) , but these are necessarily limited in scope. Our interest is in providing broad lexical coverage, potentially in any language. 
Here, we will show that style is particularly amenable to corpus-based automated lexical acquisition.", "cite_spans": [ { "start": 507, "end": 529, "text": "(Kessler et al., 1997)", "ref_id": "BIBREF17" }, { "start": 549, "end": 578, "text": "(Rosenthal and McKeown, 2011)", "ref_id": "BIBREF24" }, { "start": 616, "end": 639, "text": "(Peterson et al., 2011)", "ref_id": "BIBREF22" }, { "start": 661, "end": 682, "text": "(Wilson et al., 2005)", "ref_id": "BIBREF30" }, { "start": 712, "end": 747, "text": "(Collins-Thompson and Callan, 2005)", "ref_id": "BIBREF9" }, { "start": 770, "end": 782, "text": "(Hovy, 1990)", "ref_id": "BIBREF13" }, { "start": 815, "end": 827, "text": "Biber (1988)", "ref_id": "BIBREF3" }, { "start": 1102, "end": 1119, "text": "(Coltheart, 1980)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our approach to this problem is grounded in methods popularized for polarity lexicon creation (Turney and Littman, 2003) , but we take a more holistic view than is typical, simultaneously tackling the acquisition of several styles in a single model. Not only is this theoretically warranted, due to the correlation effects resulting from the oral/literate spectrum of register, but we also show it can offer practical gains: our hybrid models first derive initial estimates of each style from a large social media corpus, and then refine these estimates based partially on the results from other styles. We demonstrate that various popular methods are applicable to this problem, and indeed a single method might not provide the best results for all styles. 
For evaluation, we use a consensus annotation, the results of which also raise interesting questions about annotation for more continuous kinds of variation.", "cite_spans": [ { "start": 94, "end": 120, "text": "(Turney and Littman, 2003)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In English manuals of style and other prescriptivist texts (Strunk and White, 1979; Kane, 1983) , writers are urged to pay attention to various aspects of lexical style, including elements such as familiarity, readability, formality, fanciness, colloquialness, specificity, concreteness, and objectivity; these stylistic categories reflect common aesthetic judgments about language, but are also inextricably linked to the conventions of register and genre. See Biber and Conrad (2009) for a discussion of the relationship between register, genre, and style as traditionally defined in descriptive linguistics. Some researchers have posited a few fixed styles (Joos, 1961) or a small, discrete set of situational constraints which determine style and register (Halliday and Hasan, 1976) ; by contrast, the applied approach of Biber (1988) and theoretical framework of Leckie-Tarry (1995) offer a more continuous interpretation of register variation. In Biber's approach, functional dimensions such as Involved vs. Informational, Argumentative vs. Non-argumentative, and Abstract vs. Nonabstract are derived in an unsupervised manner from a mixed-genre corpus, with the labels assigned depending on where features (a small set of known indicators of register) and genres fall on each spectrum. The theory of Leckie-Tarry posits a single main cline of register with one pole (the oral pole) reflecting a reliance on the context of the linguistic situation, and the other (the literate pole) reflecting a reliance on cultural knowledge. 
The more specific elements of register are represented as subclines which are strongly influenced by this main cline, creating probabilistic relationships between related dimensions.", "cite_spans": [ { "start": 59, "end": 83, "text": "(Strunk and White, 1979;", "ref_id": "BIBREF25" }, { "start": 84, "end": 95, "text": "Kane, 1983)", "ref_id": "BIBREF15" }, { "start": 462, "end": 485, "text": "Biber and Conrad (2009)", "ref_id": "BIBREF2" }, { "start": 660, "end": 672, "text": "(Joos, 1961)", "ref_id": "BIBREF14" }, { "start": 760, "end": 786, "text": "(Halliday and Hasan, 1976)", "ref_id": null }, { "start": 826, "end": 838, "text": "Biber (1988)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Computational linguistics research most similar to ours has focused on classifying the lexicon in terms of individual aspects relevant to style (e.g. formality, specificity, readability, and concreteness) (Brooke et al., 2010; Pan and Hsieh, 2010; Kidwell et al., 2009; Turney et al., 2011) . 
Of particular methodological relevance is work on the induction of polarity lexicons based on co-occurrence in large corpora (Turney and Littman, 2003; Velikovich et al., 2010) , or connections in WordNet (Rao and Ravichandran, 2009; Baccianella et al., 2010) ; semi-supervised vector space and graph methods are common, and several of the methods we apply here are taken directly from or inspired by work in this area.", "cite_spans": [ { "start": 205, "end": 226, "text": "(Brooke et al., 2010;", "ref_id": "BIBREF6" }, { "start": 227, "end": 247, "text": "Pan and Hsieh, 2010;", "ref_id": "BIBREF21" }, { "start": 248, "end": 269, "text": "Kidwell et al., 2009;", "ref_id": "BIBREF18" }, { "start": 270, "end": 290, "text": "Turney et al., 2011)", "ref_id": "BIBREF28" }, { "start": 417, "end": 443, "text": "(Turney and Littman, 2003;", "ref_id": "BIBREF27" }, { "start": 444, "end": 468, "text": "Velikovich et al., 2010)", "ref_id": "BIBREF29" }, { "start": 497, "end": 524, "text": "(Rao and Ravichandran, 2009;", "ref_id": "BIBREF23" }, { "start": 525, "end": 550, "text": "Baccianella et al., 2010)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this study, we consider six styles (colloquial, literary, concrete, abstract, subjective, and objective) which are clearly represented in the lexicon, which are mentioned often in the relevant English linguistics literature, and which have strong positive and negative correlations with other styles in the group. Many (but not all) of these correlations are related to the oral/literate distinction. 
Our definition of each style (adapted from our annotation guidelines) is given below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word annotation", "sec_num": "3" }, { "text": "Colloquial Words which are used primarily in very informal contexts, for instance slang words and internet abbreviations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word annotation", "sec_num": "3" }, { "text": "Literary Words which you would expect to see primarily in literature; these words often feel oldfashioned or flowery.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word annotation", "sec_num": "3" }, { "text": "Concrete Words which refer to events, objects, or properties of objects in the physical world that you would be able to see, hear, smell, or touch.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word annotation", "sec_num": "3" }, { "text": "Abstract Words which refer to something that requires major psychological or cultural knowledge to grasp; complex ideas which can't purely be defined in physical terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word annotation", "sec_num": "3" }, { "text": "Subjective Words which are strongly emotional or reflect a personal opinion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word annotation", "sec_num": "3" }, { "text": "Objective Words which are emotionally distant, explicitly avoiding any personal opinion, instead projecting a sense of disinterested authority.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word annotation", "sec_num": "3" }, { "text": "Our method and evaluation relies on having a set of seed words for each style. 
The words used in this study were originally collected from various sources by the authors; we included words that we considered clear members of a particular stylistic category (though they might also belong to other categories), with little or no ambiguity with respect to that style. Colloquial seeds consist of English slang terms and acronyms, e.g. cuz, gig, asshole, lol. The literary seeds were primarily drawn from web sites which explain difficult language in texts such as the Bible and Lord of the Rings; examples include behold, resplendent, amiss, and thine. The concrete seeds all denote physical objects and actions, e.g. shove and lamppost, while the abstract seeds all involve nontrivial concepts, e.g. patriotism and nonchalant. For our subjective seeds, we used an edited list of strongly positive and negative terms from a manually-constructed sentiment lexicon (Taboada et al., 2011) , e.g. gorgeous and depraved, and for our objective set we selected words from sets of near-synonyms where one was clearly an emotionally-distant, formal alternative, e.g. residence (for home) or occupied (for busy). We filtered initial lists to 150 of each type (900 in total), removing words which did not appear in the corpus or which occurred in multiple lists.", "cite_spans": [ { "start": 952, "end": 974, "text": "(Taboada et al., 2011)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Word annotation", "sec_num": "3" }, { "text": "Relying on a single annotator, however, is problematic, and a more serious issue with our original seed sets is that many of the seeds belong on multiple lists, reflecting the fact that stylistic correlations occur at the lexical level. This interferes with evaluation, since we need to be fairly certain not only which seeds are in a category, but which are not. 
Therefore, we carried out a full annotation study with 5 annotators, asking each annotator to tag all 900 words for each of the 6 styles according to guidelines we prepared. One of the authors was included as an annotator (this annotation was carried out prior to all the others), but the other four were unfamiliar with the project; all were native English speakers with at least an undergraduate degree, and all reported reading a variety of text genres for work and/or pleasure. We provided written guidelines explaining each style in detail, and asked annotators to make judgments based on what they felt to be the most common sense. Communication among annotators was restricted during the process, but we allowed access to other resources (e.g. the internet) and answered general questions about the guidelines that came up during the process. A few annotators had obviously skewed numbers for certain styles relative to other annotators due to misinterpretation of the guidelines, and we provided non-specific feedback for revision in these cases. The Fleiss's kappa (Fleiss, 1971) values for our 5-way annotation study are presented in Table 1 . 1 The kappa values in Table 1 indicate agreement well above chance, but several of the dimensions (and the average) are below the 0.67 standard for reliable annotation (Artstein and Poesio, 2008) , and only one (colloquial) reaches the higher 0.8 standard. This suggests that there is a sizable subjective aspect to these judgments and we should be somewhat skeptical of the judgment of any particular annotator. However, we had forced our annotators to make a boolean choice for each style, which may be inappropriate for a somewhat non-discrete phenomenon like style. Taboada et al. 
(2011) , when validating their fine-grained manual polarity lexicon (which included annotation of both polarity and strength), demonstrated that Mechanical Turk worker disagreement on a boolean task seemed to correspond fairly well to ranges on a scale: there was agreement at the extremes of polarity, but increasing disagreement towards the middle.", "cite_spans": [ { "start": 1441, "end": 1455, "text": "(Fleiss, 1971)", "ref_id": "BIBREF11" }, { "start": 1521, "end": 1522, "text": "1", "ref_id": null }, { "start": 1689, "end": 1716, "text": "(Artstein and Poesio, 2008)", "ref_id": "BIBREF0" }, { "start": 2098, "end": 2119, "text": "Taboada et al. (2011)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 1511, "end": 1518, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 1543, "end": 1550, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Word annotation", "sec_num": "3" }, { "text": "With this in mind, we used our initial annotations to create a new annotation task for two of our external annotators: the goal was to investigate whether annotators could identify relative differences in degree suggested by other annotators' agreement or disagreement with their choices. First, we extracted minority opinions, defined here as word/style combinations where the annotator agreed with exactly one other annotator and disagreed with the three others, and consensus opinions, defined as those where all the annotators agreed. We randomly paired each minority opinion word/style with a consensus opinion; for both opinions, the annotator in question had made the same judgment (both yes, or both no), but some of the other annotators had made different choices. 
We then asked our annotators (who were unaware of the exact nature of the experiment) to pick, between two words they had tagged the same in the first round, the word which had 'more' of the relevant stylistic quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word annotation", "sec_num": "3" }, { "text": "In the negative case (where the annotator had originally marked both as not having the style), the results are stark: in 97% of the cases, the annotator picked the minority opinion (i.e. the word which some other annotators had marked yes), suggesting that the annotator could identify the stylistic tendencies of the (mixed-agreement) word, but had nonetheless excluded it, probably because there were much clearer examples of this style, and because other styles could be more clearly applied to the word. In the positive case, the annotators preferred the word with group consensus 82.7% of the time, which is indeed the pattern we would predict if the minority opinion is less extreme; the positive case is more subtle than the negative case, where many of the words used for comparison very clearly do not belong to the relevant style. These results are consistent with the idea that disagreement is a rough indicator of degree, and that not all disagreement should be dismissed as noise or some other failure of annotation. Of course, this also indicates that relative or continuous (e.g. Likert scale) judgments might be preferable to boolean ones, but in this case boolean annotation is far more practical, and indeed desirable for both model creation and evaluation. For our final seed set, our positive annotations include all word/style combinations where a majority of annotators marked yes, whereas our negative annotations include only terms where there was complete consensus; words where only 1 or 2 annotators marked yes were removed from consideration as seeds (for that particular style). 
A summary of the counts for the main seed set is presented in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 1665, "end": 1672, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Word annotation", "sec_num": "3" }, { "text": "Our method for stylistic lexicon acquisition breaks down into three steps. The first is to apply one of several methods that leverage co-occurrence in a large corpus to derive, for each word, a raw score for each style. We then take that raw score and normalize it; the resulting number can be used directly to compare words with respect to a style. Finally, we consider the vector formed by these normalized style scores, and apply other methods which further refine this vector, implicitly taking into account the correlations among styles. The elements of the refined vector correspond to the degree of each style, so if we apply this method for all words in our vocabulary we create a full-coverage lexicon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "4" }, { "text": "For all the methods in this section, we use the same corpus, the ICWSM Spinn3r 2009 dataset (Burton et al., 2009) , which has been used successfully in earlier work (Brooke et al., 2010) . Social media corpora are particularly appropriate for research on style, since they contain a variety of registers. Here, we include all 2.46 million texts in the Tier 1 portion which contained at least 100 word types. 
Hapax legomena were excluded, since they could not possibly offer any co-occurrence information, but otherwise we did not filter or lemmatize words: our full vocabulary is 1.95 million words.", "cite_spans": [ { "start": 92, "end": 113, "text": "(Burton et al., 2009)", "ref_id": "BIBREF7" }, { "start": 165, "end": 186, "text": "(Brooke et al., 2010)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus analysis", "sec_num": "4.1" }, { "text": "Our simplest method uses pointwise mutual information (PMI) (Church and Hanks, 1990 ), a popular metric for measuring the association between words. Since standard PMI has a lower bound of \u2212\u221e when the joint probability is 0 (a common occurrence since many of our words are relatively rare), we actually use a normalized version, NPMI, which has an upper bound of 1 and a lower bound of \u22121.", "cite_spans": [ { "start": 60, "end": 83, "text": "(Church and Hanks, 1990", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus analysis", "sec_num": "4.1" }, { "text": "NPMI(x, y) = log [p(x, y) / (p(x)p(y))] / (\u2212log p(x, y))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus analysis", "sec_num": "4.1" }, { "text": "Following earlier work (Brooke et al., 2010) , here and elsewhere we do not use the term frequency within a document (which is less relevant to style). Instead, the probabilities are calculated using the number of documents where the word or words appear divided by the total number of documents. 
The raw score r_ij for style i of word w_j is simply the sum of its NPMI with the associated set of seeds S_i:", "cite_spans": [ { "start": 23, "end": 44, "text": "(Brooke et al., 2010)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus analysis", "sec_num": "4.1" }, { "text": "r_ij = \u2211_{s\u2208S_i} NPMI(w_j, s)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus analysis", "sec_num": "4.1" }, { "text": "Our second method, LSA, was applied to formality by Brooke et al. (2010) and concreteness by Turney et al. (2011) . We begin by converting our corpus into a binary word-document matrix, and carry out latent semantic analysis (Landauer and Dumais, 1997), which includes a singular value decomposition of the matrix and dimensionality reduction to k dimensions. Assuming v_w denotes the resulting k-dimensional vector for word w, we calculate r_ij as:", "cite_spans": [ { "start": 52, "end": 72, "text": "Brooke et al. (2010)", "ref_id": "BIBREF6" }, { "start": 93, "end": 113, "text": "Turney et al. (2011)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus analysis", "sec_num": "4.1" }, { "text": "r_ij = \u2211_{s\u2208S_i} cos(\u03b8(v_{w_j}, v_s))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus analysis", "sec_num": "4.1" }, { "text": "Our third method, using latent Dirichlet allocation (Blei et al., 2003) , is more novel for lexical acquisition, and we address the specifics of this method in more detail in other work (Brooke and Hirst, 2013) . Briefly, LDA is a Bayesian topic model which assumes that texts are generated via a distribution of topics for each text (\u03b8), and a distribution of words for each topic (\u03b2); given a corpus, appropriate values for \u03b8 and \u03b2 are derived using inference, in this case variational Bayes inference using the original implementation provided by Blei et al. (2003) . 
Our method works by seeding each of six topics in an LDA model (corresponding to our six styles) by dividing the entire initial probability mass among the seeds and running two iterations of the model, which distributes some of the probability mass to co-occurring words. In our previous work, we found further iterations had no benefit and even slightly degraded the model. For the LDA method, r_ij corresponds directly to \u03b2_ij of the resulting model, which is just the probability of topic (style) i generating w_j.", "cite_spans": [ { "start": 52, "end": 71, "text": "(Blei et al., 2003)", "ref_id": "BIBREF4" }, { "start": 186, "end": 210, "text": "(Brooke and Hirst, 2013)", "ref_id": "BIBREF5" }, { "start": 552, "end": 570, "text": "Blei et al. (2003)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus analysis", "sec_num": "4.1" }, { "text": "The raw numbers derived from corpus analysis methods discussed above cannot be used directly as indicators of style: the frequencies of both the seeds and the words being predicted have a significant effect on the relative and absolute magnitudes of each style for all our methods, and performance using just these numbers is near chance. However, in two steps we can normalize these numbers to a form where the magnitude does directly reflect the degree of a style. Again, r_ij refers to the raw score for style i and word j from some corpus analysis method. First, we take steps to ensure that r_ij is nonnegative. For LDA this is unnecessary (since r_ij is based on a probability distribution), but for NPMI and LSA it is needed, since both involve summing over items which vary between \u22121 and 1. We can ensure that these are positive by adding a constant equal to the number of seeds. 
Next, we convert the result to a style 'distribution' for each word:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalization", "sec_num": "4.2" }, { "text": "r_ij = (r_ij + |S_i|) / \u2211_{k=1}^{6} (r_kj + |S_k|)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalization", "sec_num": "4.2" }, { "text": "The result is still not useful, since frequency (and count) of seeds clearly still has an effect. To focus on the differences between words, we subtract the means for each style and divide by the standard deviation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalization", "sec_num": "4.2" }, { "text": "b_ij = (r_ij \u2212 r\u0304_i) / \u03c3_{r_i}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalization", "sec_num": "4.2" }, { "text": "to reach b_ij, the base for the 'style space' methods in the next subsection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalization", "sec_num": "4.2" }, { "text": "Given a vector that represents the styles for a given word, we wish to refine the vector to improve performance on relative judgments for individual styles. Here, we test two options: the first transforms the stylistic vectors into k-Nearest Neighbor (kNN) graphs, where we can apply label propagation. The second option treats the vector as a set of features for supervised linear regression, one for each style, using a specialized loss function. Both methods rely on having a style vector representation of not only our target words, but also our seed (training) words. For LSA and NPMI, we used leave-one-out cross-validation to create these vectors; for LDA, however, it was impractical to do a full run of the model for each word, and so we used 10-fold cross-validation instead. A vector-space representation offers a number of obvious similarity functions for building a kNN graph: we test two here, inverse Euclidean distance (L2) and cosine similarity (cos). 
A more difficult problem is the choice of k for the kNN graphs: here, we estimate a good k from the training set. Since the training set and dimensionality of the data are (now) fairly small, we simply test all values of k at intervals of 5, and choose the best (often near 50, though we saw values as low as 10 and as high as 90) using our pairwise evaluation (see Section 5.1). Since our label propagation method works independently for each style, we can choose a different k for each.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Style Vector Optimization", "sec_num": "4.3" }, { "text": "For label propagation, we use the simple one-step propagation function from Kang et al. (2006) . Here, K is our similarity function (which returns zero if seed s is not one of the k nearest neighbors), and z_ij is the resulting confidence score, which we use as our new estimate for the style:", "cite_spans": [ { "start": 75, "end": 93, "text": "Kang et al. (2006)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Style Vector Optimization", "sec_num": "4.3" }, { "text": "z_ij = \u2211_{w_s\u2208S_i} K(w_j, w_s)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Style Vector Optimization", "sec_num": "4.3" }, { "text": "Obviously, the main work here is done by the similarity function, which implicitly includes information from other stylistic dimensions by preferring words which are close not just on the relevant dimension, but in the stylistic space as a whole. There are of course more sophisticated, multi-step approaches to label propagation, e.g. the one used by Rao and Ravichandran (2009) , but a single-step approach has clear advantages in light of our large vocabulary and dense graph; we leave exploration of whether unlabeled words can help further to future work. We did test the one-step correlated label propagation method proposed by Kang et al. 
but found it was ineffective, probably because it increases the effects of correlation, which is actually counter to our needs.", "cite_spans": [ { "start": 352, "end": 379, "text": "Rao and Ravichandran (2009)", "ref_id": null }, { "start": 635, "end": 646, "text": "Kang et al.", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Style Vector Optimization", "sec_num": "4.3" }, { "text": "The information provided by label propagation is distinct enough that it can be successfully combined with the original (base) vector. As with k for kNN, we estimated a good weighting for this combination using the training data, testing at 0.01 intervals. Since we noted some interdependence, we combined this step with the selection of (kNN) k. Again, this ratio can be different for each style.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Style Vector Optimization", "sec_num": "4.3" }, { "text": "Our second vector optimization technique is an adaptation of supervised linear regression. Linear regression usually involves minimizing the squared distance of the output of the model from the training set, assuming there are known values of expected output. In this case, however, we don't have reliable values for specific degrees of a style. 
We proceed by replacing the least-squares loss function with a loss function based on our evaluation metric (see Section 5.1):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Style Vector Optimization", "sec_num": "4.3" }, { "text": "L(\u03b8) = \u2211_{w_j\u2208S_{i,p}} \u2211_{w_m\u2208S_{i,n}} I(h_\u03b8(b_ij) < h_\u03b8(b_im))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Style Vector Optimization", "sec_num": "4.3" }, { "text": "Here, S_{i,p} and S_{i,n} refer to the positive and negative examples of style i, respectively, h_\u03b8 is the linear regression function, and I is an indicator function equal to 1 if the statement is true, and 0 otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Style Vector Optimization", "sec_num": "4.3" }, { "text": "Using such a loss function precludes standard approaches to linear regression, but in this context (a small feature space and training set), it is reasonably practical to search the space exhaustively for weights which provide a (near-)optimal result (on the training data). 2 Starting with full weight (1) on the feature corresponding to the dimension being derived and 0 on all others, we search the range \u22121 to 1 at 0.001 intervals for the other dimensions, proceeding in order based on the greatest difference across positive and negative examples of each style. We found that one such iteration across each element of the vector was sufficient, resulting in a stable model. This method can be applied on the initial vector, or on a vector that has already been refined by some other method, i.e. the output of label propagation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Style Vector Optimization", "sec_num": "4.3" }, { "text": "Our evaluation is based on the pairwise comparison of words which are known (from our annotation) to differ with respect to a certain style. 
Accuracy for a test set S_i (for a style i) is defined as the number of instances where the expected inequality exists between a pair of opposing words, divided by the total number of such pairings:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "5.1" }, { "text": "Accuracy(S_i) = \\frac{\\sum_{w_j \\in S_{i,p}} \\sum_{w_m \\in S_{i,n}} I(z_{ij} > z_{im})}{|S_{i,p}| \\cdot |S_{i,n}|}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "5.1" }, { "text": "Here z can refer to any of the metrics for style discussed in the previous section. The major advantage of this definition of accuracy is that it does not require an arbitrary cutoff point, but 100% accuracy nonetheless indicates that the two sets are perfectly separable. Also, it does not assume anything about the degree of difference between two words, e.g. that more is better, since for any given pair of words we cannot be certain what an ideal difference would be. We evaluate using 3-fold cross-validation, using the original 150-per-style annotation of our 900 words for the purposes of stratifying the data, which allows for balanced sets of 600 for training and 300 for testing. All seeding, training, and evaluation use the majority annotation of the 5 annotators, discussed in Section 3. Since the initial splits add a significant random factor, all results here are averaged over 5 runs, with the same 5 runs (i.e. same splits) used for all evaluated conditions. Table 3 shows a comparison of the performance of various models, organized by the method of corpus analysis. First, we note that most of these numbers are quite high: almost all are above 80%, and most are above 90%. It is worth mentioning that if only direct opposites are considered (e.g. colloquial versus literary, concrete versus abstract), most dimensions reach results above 99%; our multi-style evaluation here offers a more realistic view.
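The accuracy metric above can be computed directly; the following is a toy sketch (word indices and scores are invented for illustration, not taken from the paper's data):

```python
def pairwise_accuracy(z, pos, neg):
    """Accuracy(S_i): fraction of (positive, negative) word pairs in
    which the positive word's style score z strictly exceeds the
    negative word's; 1.0 means the two sets are perfectly separable."""
    correct = sum(z[j] > z[m] for j in pos for m in neg)
    return correct / (len(pos) * len(neg))

# Toy scores for four words: indices 0 and 2 are annotated positive
# for the style, indices 1 and 3 negative.
z = [0.9, 0.7, 0.6, 0.1]
print(pairwise_accuracy(z, pos=[0, 2], neg=[1, 3]))  # 3 of 4 pairs correct: 0.75
```

As the metric only compares rankings, any monotonic transformation of the scores z leaves the accuracy unchanged, which matches the paper's point that no assumption is made about the degree of difference between two words.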
Among individual styles, colloquial words seem the most distinct, which is consistent with the results of human annotation. Acquisition of subjectivity, on the other hand, is strikingly more difficult than the other styles.", "cite_spans": [], "ref_spans": [ { "start": 977, "end": 984, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Setup", "sec_num": "5.1" }, { "text": "Table 3 : Model performance in lexical induction of seeds, % pairwise accuracy. LP = label propagation, cos = cosine similarity, L2 = inverse Euclidean distance, LR = linear regression. Bold is best in column.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Comparison of models", "sec_num": "5.2" }, { "text": "Based only on average accuracy, we could conclude that LSA > LDA > NPMI with respect to extracting relevant stylistic information from the corpus. That NPMI is the worst-performing method is not surprising, since it relies only on direct co-occurrence between seeds and test words, and is not able to take advantage of larger patterns in the data; we would expect similar results for other simple relatedness measures. Though LSA is better overall, the distinction between LSA and LDA is more subtle, since in fact LDA is the higher-performing model for two of the six styles, and its poorer overall performance can be attributed to a rather dismal showing for literary words, worse than NPMI. This is interesting because subjective and concrete words, where LDA does well, are the most common in the corpus, whereas literary words are consistently the least common. We posit, based on this and our earlier research focused on the LDA method, that successful low-dimensional seeded LDA requires styles (topics) that are reasonably well-represented in the corpus; when that condition is met, LDA will likely do better than LSA because it will distinguish rather than collapse correlated styles.
LSA, on the other hand, is robust against the scarcity problem because it requires only that a set of words have a reasonably distinct k-dimensional profile to form a coherent style.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of models", "sec_num": "5.2" }, { "text": "Based on the results in Table 3, we can conclude decisively that both of our optimization techniques are effective. The effect is particularly marked for NPMI, but it is reasonably consistent across all three corpus analysis techniques and the various individual styles. With regard to the similarity function in label propagation, we found that cosine similarity, a less common choice for building graphs, was generally as good as, and often better than, Euclidean distance. The vector resulting from label propagation also consistently benefited from being combined with the base vector, the result being better than either alone. It is not entirely clear which of the two optimization methods is to be preferred (their effects seem roughly similar), though linear regression seems to have an edge when using LSA. Combining the two methods seems a good strategy, particularly for LDA.", "cite_spans": [], "ref_spans": [ { "start": 24, "end": 31, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Comparison of models", "sec_num": "5.2" }, { "text": "The LSA results presented here mostly use k = 500, a fairly standard choice. However, we tested other values, in particular extremely low values (k = 20), to see if we could confirm our supposition (Brooke et al., 2010) that much stylistic information is contained within the first few dimensions of LSA. Our results suggest that the basic supposition is valid, since the difference between the two conditions for most dimensions is not large, but the identification of subjectivity (not considered by Brooke et al.
2010) does seem to benefit greatly from a higher-dimensional vector.", "cite_spans": [ { "start": 197, "end": 217, "text": "(Brooke et al., 2010", "ref_id": "BIBREF6" }, { "start": 500, "end": 518, "text": "Brooke et al. 2010", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison of models", "sec_num": "5.2" }, { "text": "To investigate further the successes and failures of our method, we carried out two qualitative examinations of the output of our model. First, we looked at those words within our annotated set of words which consistently caused the most errors across the various splits and runs. Second, we ran a high-performing LSA model built from the entire seed set on a subset of our vocabulary (we excluded words of document frequency less than 100), creating lexicons for each style; we manually inspected non-seed words that were ranked highest on each dimension.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative analysis", "sec_num": "6" }, { "text": "The clearest result from the inspection of the seed output was that many of the false negatives involve words that are strong on some other dimension, typically on the other side of the oral/literate divide. For example, the most difficult-to-identify literary and abstract terms are strongly subjective (e.g. loathe and obscene), while the most difficult objective word, translucent, is very concrete. The most difficult concrete words are literary (yoke, raiment) or objective (conflagration), and the most difficult subjective words are also somewhat objective (eminent) or abstract (autocratic). Interestingly, a manual inspection of the weights for linear regression suggests that our optimization is correcting for just this kind of situation: we generally see negative weights on (what we would predict to be) positively correlated styles, and vice versa.
However, in certain cases where one style has a much larger role in determining the co-occurrence pattern in the corpus, this correction may be insufficient.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative analysis", "sec_num": "6" }, { "text": "Most of the false positives, by contrast, involve overextension of each category in predictable ways. For example, our highest-ranking literary words from the general vocabulary were mostly very good, but contained a few words that are obvious overgeneralizations into biblical and fantasy texts, e.g. locust and sorcerers, while among the objective words there were a number of academia-relevant words that are really more abstract than objective, e.g. coauthors and peer-review. Our derived colloquial words contained many (sometimes purposeful) misspellings (wayy, annnnd) which we could argue are genuinely colloquial; less clear are the many lower-case celebrity names (e.g. miley), but the fact that the bloggers used lower case does make them non-standard. Consistent with our quantitative results, subjective was the most problematic in the general vocabulary: though there were many good subjective words, there were a lot of other words which suggest topics that people tend to express opinions about, e.g. sitcoms, entertainer, or flick; movie-related words are particularly common, which might be a reflection of the lexicon from which we took our subjective seeds.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative analysis", "sec_num": "6" }, { "text": "We have presented a methodology for deriving high-quality stylistic lexicons from corpora. A key aspect of our approach is its hybrid nature: information is first extracted (using efficient, well-established methods) in a semi-supervised fashion from large corpora, and then refined using fully-supervised techniques.
We argue that there are clear benefits in looking at multiple styles simultaneously, not only in terms of improving performance but also in taking our evaluation beyond 'toy' situations where we ignore the complexities and interactions among styles, drawing connections with broader insights from linguistics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "One possible criticism of our method is that we use only co-occurrence information, and not other information (e.g. word morphology) which could be relevant to particular styles in English; this option should be explored further, particularly in the optimization phase where we can easily add other features, though we stress that our ultimate goal is to derive methods that are easily extensible to more styles and more languages. We have also not considered word senses or multiword expressions, but both can and should be added to the model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "The annotations and our guidelines are available at http://cs.toronto.edu/~jbrooke/style annotations.zip.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "At the suggestion of a reviewer, we also tried applying SVMrank to this regression; it was much faster but performance was worse.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by the Natural Sciences and Engineering Research Council of Canada.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Inter-coder agreement for computational linguistics", "authors": [ { "first": "Ron", "middle": [], "last": "Artstein", "suffix": "" }, { "first": "Massimo", "middle": [], "last": "Poesio", "suffix": "" } ], "year": 
2008, "venue": "Computational Linguistics", "volume": "34", "issue": "4", "pages": "555--596", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4):555-596.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "SentiWordNet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining", "authors": [ { "first": "Stefano", "middle": [], "last": "Baccianella", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Esuli", "suffix": "" }, { "first": "Fabrizio", "middle": [], "last": "Sebastiani", "suffix": "" } ], "year": 2010, "venue": "Proceedings of LREC'10", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2010. SentiWordNet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In Proceedings of LREC'10, Valletta, Malta.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Register, Genre, and Style", "authors": [ { "first": "Douglas", "middle": [], "last": "Biber", "suffix": "" }, { "first": "Susan", "middle": [], "last": "Conrad", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douglas Biber and Susan Conrad. 2009. Register, Genre, and Style. Cambridge University Press.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Variation Across Speech and Writing", "authors": [ { "first": "Douglas", "middle": [], "last": "Biber", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douglas Biber. 1988. Variation Across Speech and Writing. 
Cambridge University Press.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Latent Dirichlet allocation", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Jordan", "suffix": "" }, { "first": "John", "middle": [], "last": "Lafferty", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "993--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Blei, Andrew Y. Ng, Michael I. Jordan, and John Lafferty. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A multidimensional Bayesian approach to lexical style", "authors": [ { "first": "Julian", "middle": [], "last": "Brooke", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 2013, "venue": "Proceedings of NAACL '13", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julian Brooke and Graeme Hirst. 2013. A multidimensional Bayesian approach to lexical style. In Proceedings of NAACL '13, Atlanta.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Automatic acquisition of lexical formality", "authors": [ { "first": "Julian", "middle": [], "last": "Brooke", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 2010, "venue": "Proceedings of COLING '10", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julian Brooke, Tong Wang, and Graeme Hirst. 2010. Automatic acquisition of lexical formality. 
In Proceedings of COLING '10, Beijing.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The ICWSM 2009 Spinn3r Dataset", "authors": [ { "first": "Kevin", "middle": [], "last": "Burton", "suffix": "" }, { "first": "Akshay", "middle": [], "last": "Java", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Soboroff", "suffix": "" } ], "year": 2009, "venue": "Proceedings of ICWSM '09", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Burton, Akshay Java, and Ian Soboroff. 2009. The ICWSM 2009 Spinn3r Dataset. In Proceedings of ICWSM '09, San Jose.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Word association norms, mutual information, and lexicography", "authors": [ { "first": "Kenneth", "middle": [ "Ward" ], "last": "Church", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Hanks", "suffix": "" } ], "year": 1990, "venue": "Computational Linguistics", "volume": "16", "issue": "1", "pages": "22--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):22-29.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Predicting reading difficulty with statistical language models", "authors": [ { "first": "Kevyn", "middle": [], "last": "Collins-Thompson", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Callan", "suffix": "" } ], "year": 2005, "venue": "Journal of the American Society for Information Science and Technology", "volume": "56", "issue": "13", "pages": "1448--1462", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevyn Collins-Thompson and Jamie Callan. 2005. Predicting reading difficulty with statistical language models. 
Journal of the American Society for Information Science and Technology, 56(13):1448-1462.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "MRC Psycholinguistic Database User Manual: Version 1", "authors": [ { "first": "Max", "middle": [], "last": "Coltheart", "suffix": "" } ], "year": 1980, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Max Coltheart. 1980. MRC Psycholinguistic Database User Manual: Version 1. Birkbeck College.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Measuring nominal scale agreement among many raters", "authors": [ { "first": "Joseph", "middle": [ "L" ], "last": "Fleiss", "suffix": "" } ], "year": 1971, "venue": "Psychological Bulletin", "volume": "76", "issue": "5", "pages": "378--382", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph L. Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378-382.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Pragmatics and natural language generation", "authors": [ { "first": "Eduard", "middle": [ "H" ], "last": "Hovy", "suffix": "" } ], "year": 1990, "venue": "Artificial Intelligence", "volume": "43", "issue": "", "pages": "153--197", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eduard H. Hovy. 1990. Pragmatics and natural language generation. Artificial Intelligence, 43:153-197.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The Five Clocks. Harcourt, Brace and World", "authors": [ { "first": "Martin", "middle": [], "last": "Joos", "suffix": "" } ], "year": 1961, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Joos. 1961. The Five Clocks. 
Harcourt, Brace and World, New York.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The Oxford Guide to Writing", "authors": [ { "first": "Thomas", "middle": [ "S" ], "last": "Kane", "suffix": "" } ], "year": 1983, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas S. Kane. 1983. The Oxford Guide to Writing. Oxford University Press.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Correlated label propagation with application to multi-label learning", "authors": [ { "first": "Feng", "middle": [], "last": "Kang", "suffix": "" }, { "first": "Rong", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Rahul", "middle": [], "last": "Sukthankar", "suffix": "" } ], "year": 2006, "venue": "Proceedings of CVPR '06", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Feng Kang, Rong Jin, and Rahul Sukthankar. 2006. Correlated label propagation with application to multi-label learning. In Proceedings of CVPR '06, New York.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Automatic detection of text genre", "authors": [ { "first": "Brett", "middle": [], "last": "Kessler", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Nunberg", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 1997, "venue": "Proceedings of ACL '97", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brett Kessler, Geoffrey Nunberg, and Hinrich Sch\u00fctze. 1997. Automatic detection of text genre. 
In Proceedings of ACL '97, Madrid.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Statistical estimation of word acquisition with application to readability prediction", "authors": [ { "first": "Paul", "middle": [], "last": "Kidwell", "suffix": "" }, { "first": "Guy", "middle": [], "last": "Lebanon", "suffix": "" }, { "first": "Kevyn", "middle": [], "last": "Collins-Thompson", "suffix": "" } ], "year": 2009, "venue": "Proceedings of EMNLP'09", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Kidwell, Guy Lebanon, and Kevyn Collins-Thompson. 2009. Statistical estimation of word acquisition with application to readability prediction. In Proceedings of EMNLP'09, Singapore.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A solution to Plato's problem: The latent semantic analysis theory of the acquisition, induction, and representation of knowledge", "authors": [ { "first": "Thomas", "middle": [ "K" ], "last": "Landauer", "suffix": "" }, { "first": "Susan", "middle": [], "last": "Dumais", "suffix": "" } ], "year": 1997, "venue": "Psychological Review", "volume": "104", "issue": "", "pages": "211--240", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas K. Landauer and Susan Dumais. 1997. A solution to Plato's problem: The latent semantic analysis theory of the acquisition, induction, and representation of knowledge. Psychological Review, 104:211-240.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Language and Context: A Functional Linguistic Theory of Register", "authors": [ { "first": "Helen", "middle": [], "last": "Leckie-Tarry", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Helen Leckie-Tarry. 1995. Language and Context: A Functional Linguistic Theory of Register. 
Pinter.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Word space modeling for measuring semantic specificity in Chinese", "authors": [ { "first": "Ching-Fen", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Shu-Kai", "middle": [], "last": "Hsieh", "suffix": "" } ], "year": 2010, "venue": "Proceedings of COLING '10", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ching-Fen Pan and Shu-Kai Hsieh. 2010. Word space modeling for measuring semantic specificity in Chinese. In Proceedings of COLING '10, Beijing.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Email formality in the workplace: A case study on the Enron corpus", "authors": [ { "first": "Kelly", "middle": [], "last": "Peterson", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Hohensee", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Xia", "suffix": "" } ], "year": 2011, "venue": "Proceedings of ACL '11", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kelly Peterson, Matt Hohensee, and Fei Xia. 2011. Email formality in the workplace: A case study on the Enron corpus. In Proceedings of ACL '11, Portland.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Semi-supervised polarity lexicon induction", "authors": [ { "first": "Delip", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Deepak", "middle": [], "last": "Ravichandran", "suffix": "" } ], "year": 2009, "venue": "Proceedings of EACL '09", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Delip Rao and Deepak Ravichandran. 2009. Semi-supervised polarity lexicon induction. 
In Proceedings of EACL '09, Athens.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Age prediction in blogs: A study of style, content, and online behavior in pre- and post-social media generations", "authors": [ { "first": "Sara", "middle": [], "last": "Rosenthal", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2011, "venue": "Proceedings of ACL '11", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sara Rosenthal and Kathleen McKeown. 2011. Age prediction in blogs: A study of style, content, and online behavior in pre- and post-social media generations. In Proceedings of ACL '11, Portland.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "The Elements of Style. Macmillan", "authors": [ { "first": "William", "middle": [], "last": "Strunk", "suffix": "" }, { "first": "E", "middle": [ "B" ], "last": "White", "suffix": "" } ], "year": 1979, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "William Strunk and E.B. White. 1979. The Elements of Style. Macmillan, 3rd edition.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Lexicon-based methods for sentiment analysis", "authors": [ { "first": "Maite", "middle": [], "last": "Taboada", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Brooke", "suffix": "" }, { "first": "Milan", "middle": [], "last": "Tofiloski", "suffix": "" }, { "first": "Kimberly", "middle": [], "last": "Voll", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Stede", "suffix": "" } ], "year": 2011, "venue": "Computational Linguistics", "volume": "37", "issue": "2", "pages": "267--307", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maite Taboada, Julian Brooke, Milan Tofiloski, Kimberly Voll, and Manfred Stede. 2011. Lexicon-based methods for sentiment analysis. 
Computational Linguistics, 37(2):267-307.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Measuring praise and criticism: Inference of semantic orientation from association", "authors": [ { "first": "Peter", "middle": [], "last": "Turney", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Littman", "suffix": "" } ], "year": 2003, "venue": "ACM Transactions on Information Systems", "volume": "21", "issue": "", "pages": "315--346", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Turney and Michael Littman. 2003. Measuring praise and criticism: Inference of semantic orientation from association. ACM Transactions on Information Systems, 21:315-346.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Literal and metaphorical sense identification through concrete and abstract context", "authors": [ { "first": "Peter", "middle": [ "D" ], "last": "Turney", "suffix": "" }, { "first": "Yair", "middle": [], "last": "Neuman", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Assaf", "suffix": "" }, { "first": "Yohai", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 2011, "venue": "Proceedings of EMNLP '11", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter D. Turney, Yair Neuman, Dan Assaf, and Yohai Cohen. 2011. Literal and metaphorical sense identification through concrete and abstract context. 
In Proceedings of EMNLP '11, Edinburgh, United Kingdom.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "The viability of web-derived polarity lexicons", "authors": [ { "first": "Leonid", "middle": [], "last": "Velikovich", "suffix": "" }, { "first": "Sasha", "middle": [], "last": "Blair-Goldensohn", "suffix": "" }, { "first": "Kerry", "middle": [], "last": "Hannan", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "McDonald", "suffix": "" } ], "year": 2010, "venue": "Proceedings of NAACL '10", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leonid Velikovich, Sasha Blair-Goldensohn, Kerry Hannan, and Ryan McDonald. 2010. The viability of web-derived polarity lexicons. In Proceedings of NAACL '10, Los Angeles.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Recognizing contextual polarity in phrase-level sentiment analysis", "authors": [ { "first": "Theresa", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Hoffmann", "suffix": "" } ], "year": 2005, "venue": "Proceedings of HLT/EMNLP '05", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In Proceedings of HLT/EMNLP '05, Vancouver.", "links": null } }, "ref_entries": { "TABREF0": { "html": null, "text": "Fleiss's kappa for 5-way annotation, by style.", "type_str": "table", "content": "
Style        Kappa
Literary     0.61
Abstract     0.37
Objective    0.55
Colloquial   0.85
Concrete     0.67
Subjective   0.63
Average      0.61
", "num": null }, "TABREF1": { "html": null, "text": "Number of seeds, by style.", "type_str": "table", "content": "
Style        Positive   Negative
Literary     132        660
Abstract     107        599
Objective    245        495
Colloquial   163        684
Concrete     190        572
Subjective   258        487
", "num": null } } } }