{ "paper_id": "I08-1041", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:42:10.496454Z" }, "title": "Using Roget's Thesaurus for Fine-grained Emotion Recognition", "authors": [ { "first": "Saima", "middle": [], "last": "Aman", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Ottawa", "location": { "settlement": "Ottawa", "country": "Canada" } }, "email": "" }, { "first": "Stan", "middle": [], "last": "Szpakowicz", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Ottawa", "location": { "settlement": "Ottawa", "country": "Canada, Poland" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Recognizing the emotive meaning of text can add another dimension to the understanding of text. We study the task of automatically categorizing sentences in a text into Ekman's six basic emotion categories. We experiment with corpus-based features as well as features derived from two emotion lexicons. One lexicon is automatically built using the classification system of Roget's Thesaurus, while the other consists of words extracted from WordNet-Affect. Experiments on the data obtained from blogs show that a combination of corpus-based unigram features with emotion-related features provides superior classification performance. We achieve Fmeasure values that outperform the rulebased baseline method for all emotion classes.", "pdf_parse": { "paper_id": "I08-1041", "_pdf_hash": "", "abstract": [ { "text": "Recognizing the emotive meaning of text can add another dimension to the understanding of text. We study the task of automatically categorizing sentences in a text into Ekman's six basic emotion categories. We experiment with corpus-based features as well as features derived from two emotion lexicons. One lexicon is automatically built using the classification system of Roget's Thesaurus, while the other consists of words extracted from WordNet-Affect. Experiments on the data obtained from blogs show that a combination of corpus-based unigram features with emotion-related features provides superior classification performance. We achieve Fmeasure values that outperform the rulebased baseline method for all emotion classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Recognizing emotions conveyed by a text can provide an insight into the author's intent and sentiment, and can lead to better understanding of the text's content. Emotion recognition in text has recently attracted increased attention of the NLP community (Alm et al., 2005; Liu et al, 2003; Mihalcea and Liu, 2006) ; it is also one of the tasks at Semeval-2007 1 .", "cite_spans": [ { "start": 255, "end": 273, "text": "(Alm et al., 2005;", "ref_id": "BIBREF0" }, { "start": 274, "end": 290, "text": "Liu et al, 2003;", "ref_id": "BIBREF9" }, { "start": 291, "end": 314, "text": "Mihalcea and Liu, 2006)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Automatic recognition of emotions can be applied in the development of affective interfaces for 1 Computer-Mediated Communication and Human-Computer Interaction. 
Other areas that can potentially benefit from automatic emotion analysis are personality modeling and profiling (Liu and Maes, 2004) , affective interfaces and communication systems (Liu et al, 2003; Neviarouskaya et al., 2007a) consumer feedback analysis, affective tutoring in e-learning systems (Zhang et al., 2006) , and textto-speech synthesis (Alm et al., 2005) .", "cite_spans": [ { "start": 96, "end": 97, "text": "1", "ref_id": null }, { "start": 274, "end": 294, "text": "(Liu and Maes, 2004)", "ref_id": "BIBREF10" }, { "start": 344, "end": 361, "text": "(Liu et al, 2003;", "ref_id": "BIBREF9" }, { "start": 362, "end": 390, "text": "Neviarouskaya et al., 2007a)", "ref_id": "BIBREF14" }, { "start": 460, "end": 480, "text": "(Zhang et al., 2006)", "ref_id": "BIBREF22" }, { "start": 511, "end": 529, "text": "(Alm et al., 2005)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this study, we address the task of automatically assigning an emotion label to each sentence in the given dataset, indicating the predominant emotion type expressed in the sentence. The possible labels are happiness, sadness, anger, disgust, surprise, fear and no-emotion. Those are Ekman's (1992) six basic emotion categories, and an additional label to account for the absence of a clearly discernible emotion.", "cite_spans": [ { "start": 286, "end": 300, "text": "Ekman's (1992)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We experiment with two types of features for representing text in emotion classification based on machine learning (ML). Features of the first type are a corpus-based unigram representation of text. Features of the second type comprise words that appear in emotion lexicons. One such lexicon consists of words that we automatically extracted from Roget's Thesaurus (1852). We chose words for their semantic similarity to a basic set of terms that represent each emotion category. Another lexicon builds on lists of words for each emotion category, extracted from WordNet-Affect (Strapparava and Valitutti, 2004) .", "cite_spans": [ { "start": 578, "end": 611, "text": "(Strapparava and Valitutti, 2004)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We compare the classification results for groups of features of these two types. We get good results when the features are combined in a series of ML experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Research in emotion recognition has focused on discerning emotions along the dimensions of valence (positive / negative) and arousal (calm / excited), and on recognizing distinct emotion categories. We focus on the latter. Liu et al. (2003) use a real-world commonsense knowledge base to classify sentences into Ekman's (1992) basic emotion categories. They use an ensemble of rule-based affect models to determine the emotional affinity of individual sentences. Neviarouskaya et al. (2007b) also use rules to determine the emotions in sentences in blog posts; their analysis relies on a manually prepared database of words, abbreviations and emoticons labeled with emotion categories.", "cite_spans": [ { "start": 223, "end": 240, "text": "Liu et al. 
(2003)", "ref_id": "BIBREF9" }, { "start": 312, "end": 326, "text": "Ekman's (1992)", "ref_id": "BIBREF3" }, { "start": 463, "end": 491, "text": "Neviarouskaya et al. (2007b)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Since these papers do not report conventional performance metrics such as precision and recall, the effectiveness of their methods cannot be judged empirically. They also disregard statistical learning methods as ineffective for emotion recognition at sentence level. They surmise that the small size of the text input (a sentence) gives insufficient data for statistical analysis, and that statistical methods cannot handle negation. In this paper, we show that ML-based approach with the appropriate combination of features can be applied to distinguishing emotions in text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Previous work has used lexical resources such as WordNet to automatically acquire emotion-related words for emotion classification experiments. Starting from a set of primary emotion adjectives, Alm et al. (2005) retrieve similar words from WordNet utilizing all senses of all words in the synsets that contain the adjectives. They also exploit the synonym and hyponym relations in WordNet to manually find words similar to nominal emotion words. Kamps and Marx (2002) use WordNet's synset relations to determine the affective meaning of words. They assign multidimensional scores to individual words based on the minimum path length between them and a pair of polar words (such as \"good\" and \"bad\") in WordNet's structure.", "cite_spans": [ { "start": 195, "end": 212, "text": "Alm et al. (2005)", "ref_id": "BIBREF0" }, { "start": 447, "end": 468, "text": "Kamps and Marx (2002)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "There is also a corpus-driven method of determining the emotional affinity of words: learn prob-abilistic affective scores of words from large corpora. Mihalcea and Liu (2006) have used this method to assign a happiness factor to words depending on the frequency of their occurrences in happy-labeled blogposts compared to their total frequency in the corpus.", "cite_spans": [ { "start": 152, "end": 175, "text": "Mihalcea and Liu (2006)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this paper, we study a new approach to automatically acquiring a wide variety of words that express emotions or emotion-related concepts, using Roget's Thesaurus (1852).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We have based our study on data collected from blogs. We chose blogs as data source because they are potentially rich in emotion content, and contain good examples of real-world instances of emotions expressed in text. Additionally, text in blogs does not conform to the style of any particular genre per se, and thus offers a variety in writing styles, choice and combination of words, as well as topics. 
Methods for discerning emotion learned from blog data should thus be fairly general and applicable to a variety of genres rather than to blogs only.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion-Labeled Data", "sec_num": "3" }, { "text": "We retrieved blogs using seed words for all emotion categories. Four human judges manually annotated the blog posts with emotion-related information; every sentence received two judgments. The annotators were required to mark each sentence with one of the eight labels: happiness, sadness, anger, disgust, surprise, fear, mixed-emotion, and no-emotion. The mixed-emotion label was included to handle those sentences that had more than one type of emotion or whose emotion content could not fit into any of the given emotion categories. Sample sentences from the annotated corpus are shown in Fig. 1 .", "cite_spans": [], "ref_spans": [ { "start": 591, "end": 597, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Emotion-Labeled Data", "sec_num": "3" }, { "text": "We measured the inter-annotator agreement using Cohen's (1960) kappa. The average pair-wise agreement for different emotion categories ranged from 0.6 to 0.79. In the experiments reported in this paper, we use only those sentences for which the two judgments agreed (to form a benchmark for the evaluation of the results of automatic classification). The distribution of emotion categories in the corpus used in our experiments is shown in Table 1.", "cite_spans": [ { "start": 48, "end": 62, "text": "Cohen's (1960)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Emotion-Labeled Data", "sec_num": "3" }, { "text": "We are interested in investigating whether emotion in text can be discerned on the basis of its lexical content. A na\u00efve approach to determining the emotional orientation of text is to look for obvious emotion words, such as "happy", "afraid" or "astonished". The presence of one or more words of a particular emotion category in a sentence provides a good premise for interpreting the overall emotion of the sentence. This approach relies on a list of words with prior information about their emotion type, and uses it for sentence-level classification. The obvious advantage is that no training data are required. For evaluation purposes, we took this approach to develop a baseline system that counts the number of emotion words of each category in a sentence, and then assigns the sentence the category with the largest number of words. Ties were resolved by choosing the emotion label according to an arbitrarily predefined ordering of emotion classes. A sentence containing no emotion word of any type was assigned the no-emotion category. This system worked with word lists extracted from WordNet-Affect (Strapparava and Valitutti, 2004) for six basic emotion categories (the emotion word lists are available at http://www.cse.unt.edu/~rada/affectivetext/data/WordNetAffectEmotionLists.tar.gz). Table 2 shows the precision, recall, and F-measure values for the baseline system. As we have seven classes in our experiments, the class imbalance makes accuracy values less relevant than precision, recall and F-measure. That is why we do not report accuracy values in our results.
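To make this baseline concrete, the following is a minimal sketch of such a counting classifier. It is illustrative only: the tokenizer, the emotion_words mapping and the particular tie-breaking order are assumptions, not the authors' exact implementation.

```python
# Illustrative sketch of the word-counting baseline (not the authors' code).
# `emotion_words` is assumed to map each Ekman category to a set of words,
# e.g. the WordNet-Affect lists.
import re

# A fixed, arbitrary order used only to break ties between categories.
TIE_ORDER = ["happiness", "sadness", "anger", "disgust", "surprise", "fear"]

def baseline_label(sentence, emotion_words):
    tokens = re.findall(r"[a-z']+", sentence.lower())
    counts = {emo: sum(tok in words for tok in tokens)
              for emo, words in emotion_words.items()}
    best = max(counts.values())
    if best == 0:
        return "no-emotion"  # no emotion word of any category was found
    # assign the category with the largest count; ties go to the earlier class
    return next(emo for emo in TIE_ORDER if counts[emo] == best)
```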
", "cite_spans": [ { "start": 1229, "end": 1262, "text": "(Strapparava and Valitutti, 2004)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 1297, "end": 1304, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "A Baseline Approach", "sec_num": "4" }, { "text": "The baseline system shows precision values above 50% for all but two classes, which indicates the usefulness of the approach. The method, however, fails in the absence of obvious emotion words in a sentence, as indicated by low recall values. Thus, in order to improve recall, we need to widen the range of words that are considered emotion-related. An alternative approach is to use ML to automatically learn rules that classify emotion in text. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Baseline Approach", "sec_num": "4" }, { "text": "We study two types of features: corpus-based features and features based on emotion lexicons.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach Based on Machine Learning", "sec_num": "5" }, { "text": "The corpus-based features exploit the statistical characteristics of the data on the basis of the n-gram distribution. In our experiments, we take unigrams (n=1) as features. Unigram models have been previously shown to give good results in sentiment classification tasks (Kennedy and Inkpen, 2006; Pang et al., 2002) : unigram representations can capture a variety of lexical combinations and distributions, including those of emotion words. This is particularly important in the case of blogs, whose language is often characterized by frequent use of new words, acronyms (such as "lol"), onomatopoeic words ("haha", "grrr"), and slang, most of which can be captured in a unigram representation. Another advantage of a unigram representation is that it does not require any prior knowledge about the data under investigation or the classes to be identified. For our experiments, we selected all unigrams that occur more than three times in the corpus. This eliminates rare words, as well as foreign-language words and spelling mistakes, which are quite common in blogs. We also excluded words that occur in a list of stopwords, primarily function words that do not generally have emotional connotations. We used the SMART stopword list, used with the SMART information retrieval system at Cornell University (ftp://ftp.cs.cornell.edu/pub/smart/english.stop), with minor modifications. For instance, we removed from the stop list words such as "what" and "why", which may be used in the context of expressing surprise.
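A minimal sketch of this unigram feature selection, and of the count-based sentence representation used in the experiments below; the simple regular-expression tokenizer and the stopwords argument are assumptions, not the exact setup used with the SMART list.

```python
# Illustrative sketch of corpus-based unigram features (assumptions noted above).
import re
from collections import Counter

def select_unigram_features(sentences, stopwords):
    """Keep unigrams that occur more than three times, excluding stop words."""
    counts = Counter(tok for s in sentences
                     for tok in re.findall(r"[a-z']+", s.lower()))
    return sorted(tok for tok, c in counts.items()
                  if c > 3 and tok not in stopwords)

def to_feature_vector(sentence, features):
    """Represent a sentence by the number of times each selected unigram occurs."""
    toks = Counter(re.findall(r"[a-z']+", sentence.lower()))
    return [toks[f] for f in features]
```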
", "cite_spans": [ { "start": 271, "end": 297, "text": "(Kennedy and Inkpen, 2006;", "ref_id": "BIBREF8" }, { "start": 298, "end": 316, "text": "Pang et al., 2002)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus-based features", "sec_num": "5.1" }, { "text": "Fig 1. Sample sentences from the corpus: This was the best summer I have ever experienced. (happiness) I don't feel like I ever have that kind of privacy where I can talk to God and cry and figure things out. (sadness) Finally, I got fed up. (disgust) I can't believe she is finally here! (surprise)", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 6, "text": "Fig 1.", "ref_id": null } ], "eq_spans": [], "section": "Corpus-based features", "sec_num": "5.1" }, { "text": "We utilized Roget's Thesaurus (Jarmasz and Szpakowicz, 2001) to automatically build a lexicon of emotion-related words. The features based on an emotion lexicon require prior knowledge about the emotion relatedness of words. We extracted this knowledge from the classification system in Roget's, which groups related concepts into various levels of a hierarchy. For a detailed account of this classification structure, see Jarmasz and Szpakowicz (2001) .", "cite_spans": [ { "start": 30, "end": 60, "text": "(Jarmasz and Szpakowicz, 2001)", "ref_id": "BIBREF4" }, { "start": 563, "end": 592, "text": "Jarmasz and Szpakowicz (2001)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Features derived from Roget's Thesaurus", "sec_num": "5.2" }, { "text": "Roget's structure allows the calculation of semantic relatedness between words, based on the path length between the nodes in the structure that represent those words. In case of multiple paths, the shortest path is considered. Jarmasz and Szpakowicz (2004) have introduced a similarity measure derived from path length, which assigns scores ranging from a maximum of 16 for the most semantically related words to a minimum of 0 for the least related words. They have shown that on semantic similarity tests this measure outperforms several other methods. To build a lexicon of emotion-related words utilizing Roget's structure, we first need to make two decisions: select a primary set of emotion words starting from which we can extract other similar words, and choose an appropriate similarity score to serve as a cutoff for determining semantic relatedness between words. The primary set of words that we selected consists of one word for each emotion category, representing the base form of the name of the category: {happy, sad, anger, disgust, surprise, fear}.", "cite_spans": [], "ref_spans": [ { "start": 743, "end": 750, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Features derived from Roget's Thesaurus", "sec_num": "5.2" }, { "text": "Table 3. Emotion-related words automatically extracted from Roget's Thesaurus", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features derived from Roget's Thesaurus", "sec_num": "5.2" }, { "text": "Experiments performed on the Miller and Charles (1991) similarity data, reported in Jarmasz and Szpakowicz (2004) , have shown that pairs of words with a semantic similarity value of 16 have high similarity, while those with a score of 12 to 14 have intermediate similarity. Therefore, we select the score of 12 as the cutoff, and include in the lexicon all words that have similarity scores of 12 or higher with respect to the words in the primary set. This choice of cutoff thus serves as a form of feature selection. In Table 3 , we present sample words from the lexicon with similarity scores of 16, 14, and 12 for each emotion category. These words represent three different levels of relatedness to each emotion category. We are able to identify a large variety of emotion-related words belonging to different parts of speech that go well beyond the stereotypical words associated with different emotions. 
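As an illustration, the lexicon construction can be sketched as follows. Here roget_similarity stands in for the Jarmasz-Szpakowicz path-based score (0 to 16) computed from Roget's Thesaurus, and vocabulary for a candidate word list; both are assumptions, since the paper does not give implementation details.

```python
# Illustrative sketch of building the emotion lexicon from Roget's Thesaurus.
# `roget_similarity(w1, w2)` is assumed to return the Jarmasz-Szpakowicz
# path-based score in [0, 16]; it is not implemented here.

PRIMARY_WORDS = {"happiness": "happy", "sadness": "sad", "anger": "anger",
                 "disgust": "disgust", "surprise": "surprise", "fear": "fear"}

def build_emotion_lexicon(vocabulary, roget_similarity, cutoff=12):
    """Keep every word whose similarity to a primary emotion word is >= cutoff."""
    lexicon = {emotion: set() for emotion in PRIMARY_WORDS}
    for word in vocabulary:
        for emotion, seed in PRIMARY_WORDS.items():
            if roget_similarity(word, seed) >= cutoff:
                lexicon[emotion].add(word)
    return lexicon
```

With cutoff=12 this mirrors the feature-selection effect described above: lowering the cutoff admits more loosely related words, while raising it keeps only the closest near-synonyms.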
We particularly note some generic neutral words, such as \"feel\", \"life\", and \"times\" associated with many emotion categories, indicating their conceptual relevance to emotions.", "cite_spans": [ { "start": 80, "end": 109, "text": "Jarmasz and Szpakowicz (2004)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 523, "end": 530, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Features derived from Roget's Thesaurus", "sec_num": "5.2" }, { "text": "WordNet-Affect is an affective lexical resource that assigns a variety of affect-related labels to a subset of WordNet synsets comprising affective concepts. We used lists of words extracted from it for each of the six emotion categories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features derived from WordNet-Affect", "sec_num": "5.3" }, { "text": "We train classifiers with unigram features for each emotion class using Support Vector Machine (SVM) for predicting the emotion category of the sentences in our corpus. SVM has been shown to be useful for text classification tasks (Joachims, 1998) , and has previously given good performance in sentiment classification experiments (Kennedy and Inkpen, 2006; Mullen and Collier, 2004; Pang and Lee, 2004; Pang et al., 2002) . In Table 4 , we report results from ten-fold cross-validation experiments conducted using the SMO implementation of SVM in Weka (Witten and Frank, 2005) . In each experiment, we represent a sentence by a vector indicating the number of times each feature occurs.", "cite_spans": [ { "start": 231, "end": 247, "text": "(Joachims, 1998)", "ref_id": null }, { "start": 332, "end": 358, "text": "(Kennedy and Inkpen, 2006;", "ref_id": "BIBREF8" }, { "start": 359, "end": 384, "text": "Mullen and Collier, 2004;", "ref_id": null }, { "start": 385, "end": 404, "text": "Pang and Lee, 2004;", "ref_id": "BIBREF17" }, { "start": 405, "end": 423, "text": "Pang et al., 2002)", "ref_id": "BIBREF18" }, { "start": 554, "end": 578, "text": "(Witten and Frank, 2005)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 429, "end": 436, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experiments and Results", "sec_num": "6" }, { "text": "In the first experiment, we use only corpusbased unigram features. We obtain high precision values for all emotion classes (as shown in Table 4 ), and the recall and F-measure values surpass baseline values for all classes except no-emotion. This validates our premise that unigrams can help learn lexical distributions well to accurately predict emotion categories.", "cite_spans": [], "ref_spans": [ { "start": 136, "end": 144, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experiments and Results", "sec_num": "6" }, { "text": "Next, we use as features all words in the emotion lexicon acquired from Roget's Thesaurus (RT). The F-measure scores beat the baseline for four out of seven classes. When we combine both corpus-based unigrams with RT features, we can increase recall values across all seven classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "6" }, { "text": "Finally, we add features from WordNet-Affect to the feature set containing corpus unigrams and RT features. This leads to further improvement in overall performance. Combining all features, we achieve highest recall values across all but one class. The resulting F-measure values (ranging from 0.493 to 0.751) surpass the baseline values across all seven classes. 
This increase was found to be statistically significant (paired t-test, p=0.05).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "6" }, { "text": "We observe that corpus-based features and emotion-related features together contribute to improved performance, better than given by any one type of feature group alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "Any automatic way of recognizing emotion should inevitably take into account a wide variety of words that are semantically connected to emotions. While some words are obviously affective, many more are only potentially affective. The latter derive their affective property from their associations with emotional concepts. For instance, words like \"family\", \"friends\", \"home\" are not inherently emotional, but because of their wellknown semantic association with emotion concepts, their presence in a sentence can be taken as an indicator of emotion expression in the sentence. We can interpret the results as indicators of how much correlation the classifiers can find between the features and the predicted class. Considering our best results using all features, we find that this correlation is highest for the \"happy\" class, indicated by a precision of 0.813 and recall of 0.698, the highest among all classes. We can therefore conclude that it is easier to discern happiness in text than Ekman's other basic emotions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "Working on a corpus of blog sentences annotated with emotion labels, we were able to demonstrate that a combination of corpus-based unigram features and features derived from emotion lexicons can help automatically distinguish basic emotion categories in written text. When used together in an SVM-based learning environment, these features increased recall in all cases and the resulting F-measure values significantly surpassed the baseline scores for all emotion categories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "In addition, we described a method of building an emotion lexicon derived from Roget's Thesaurus on the basis of semantic relatedness of words to a set of basic emotion words for each emotion category. The effectiveness of this emotion lexicon was demonstrated in the emotion classification tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Emotions from text: machine learning for text-based emotion prediction", "authors": [ { "first": "Cecilia", "middle": [ "O" ], "last": "Alm", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Joint Conference on HLT/EMNLP", "volume": "", "issue": "", "pages": "579--586", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cecilia O. Alm, Dan Roth, and Richard Sproat, Emo- tions from text: machine learning for text-based emo- tion prediction. 
In Proceedings of Joint Conference on HLT/EMNLP, pages 579-586, Vancouver, Can- ada, Oct 2005.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Affective norms for English words (ANEW): Instruction manual and affective ratings", "authors": [ { "first": "M", "middle": [ "M" ], "last": "Bradley", "suffix": "" }, { "first": "P", "middle": [ "J" ], "last": "Lang", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M.M. Bradley and P.J. Lang, Affective norms for Eng- lish words (ANEW): Instruction manual and affective ratings, Technical Report C-1, The Center for Re- search in Psychophysiology, University of Florida, 1999.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A coefficient of agreement for nominal scales", "authors": [ { "first": "J", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 1960, "venue": "Educational and Psychological Measurement", "volume": "20", "issue": "1", "pages": "37--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Cohen, A coefficient of agreement for nominal scales, Educational and Psychological Measurement, 1960, 20 (1): 37-46.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "An Argument for Basic Emotions, Cognition and Emotion", "authors": [ { "first": "Paul", "middle": [], "last": "Ekman", "suffix": "" } ], "year": 1992, "venue": "", "volume": "6", "issue": "", "pages": "169--200", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Ekman, An Argument for Basic Emotions, Cogni- tion and Emotion, 6, 1992, 169-200.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The Design and Implementation of an Electronic Lexical Knowledge Base", "authors": [ { "first": "Mario", "middle": [], "last": "Jarmasz", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Szpakowicz", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 14th Biennial Conference of the Canadian Society for Computational Studies of Intelligence", "volume": "", "issue": "", "pages": "325--333", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mario Jarmasz and Stan Szpakowicz, The Design and Implementation of an Electronic Lexical Knowledge Base. In Proceedings of the 14th Biennial Confer- ence of the Canadian Society for Computational Studies of Intelligence (AI 2001), Ottawa, Canada, June 2001, 325-333.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Amsterdam/Philadelphia, Current Issues in Linguistic Theory", "authors": [ { "first": "Mario", "middle": [], "last": "Jarmasz", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Szpakowicz", "suffix": "" } ], "year": 2004, "venue": "Recent Advances in Natural Language Processing III: Selected Papers from RANLP 2003, John Benjamins", "volume": "260", "issue": "", "pages": "111--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mario Jarmasz and Stan Szpakowicz, Roget's Thesaurus and Semantic Similarity. N. Nicolov, K. Bontcheva, G. Angelova, R. Mitkov (eds.) 
Recent Advances in Natural Language Processing III: Selected Papers from RANLP 2003, John Benjamins, Amster- dam/Philadelphia, Current Issues in Linguistic The- ory, 260, 2004, 111-120.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Text categorization with support vector machines: Learning with many relevant features", "authors": [ { "first": "Thorsten", "middle": [], "last": "Joachims", "suffix": "" } ], "year": null, "venue": "Proceedings of the European Conference on Machine Learning (ECML-98)", "volume": "", "issue": "", "pages": "137--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Joachims, Text categorization with support vector machines: Learning with many relevant fea- tures. In Proceedings of the European Conference on Machine Learning (ECML-98), pages 137-142.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Words with attitude", "authors": [ { "first": "Jaap", "middle": [], "last": "Kamps", "suffix": "" }, { "first": "Maarten", "middle": [], "last": "Marx", "suffix": "" }, { "first": "Robert", "middle": [ "J" ], "last": "Mokken", "suffix": "" }, { "first": "Marten", "middle": [], "last": "De Rijke", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 1st International Conference on Global Word-Net", "volume": "", "issue": "", "pages": "332--341", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jaap Kamps, Maarten Marx, Robert J. Mokken, and Marten de Rijke, Words with attitude, In Proceedings of the 1st International Conference on Global Word- Net, pages 332-341, Mysore, India, 2002.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Sentiment Classification of Movie Reviews Using Contextual Valence Shifters", "authors": [ { "first": "Alistair", "middle": [], "last": "Kennedy", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Inkpen", "suffix": "" } ], "year": 2006, "venue": "Computational Intelligence", "volume": "22", "issue": "2", "pages": "110--125", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alistair Kennedy and Diana Inkpen, Sentiment Classifi- cation of Movie Reviews Using Contextual Valence Shifters. Computational Intelligence, 2006, 22(2):110-125.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A model of textual affect sensing using real-world knowledge", "authors": [ { "first": "Hugo", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Henry", "middle": [], "last": "Lieberman", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Selker", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the ACM Conference on Intelligent User Interfaces", "volume": "", "issue": "", "pages": "125--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hugo Liu, Henry Lieberman, and Ted Selker, A model of textual affect sensing using real-world knowledge. In Proceedings of the ACM Conference on Intelligent User Interfaces, 2003, 125-132.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "What Would They Think? A Computational Model of Attitudes", "authors": [ { "first": "Hugo", "middle": [], "last": "Liu", "suffix": "" }, { "first": "P", "middle": [], "last": "Maes", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the ACM International Conference on Intelligent User Interfaces, IUI", "volume": "", "issue": "", "pages": "38--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hugo Liu, and P. Maes, What Would They Think? A Computational Model of Attitudes. 
In Proceedings of the ACM International Conference on Intelligent User Interfaces, IUI 2004, 38-45, ACM Press.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A corpus-based approach to finding happiness", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the AAAI Spring Symposium on Computational Approaches for Analysis of Weblogs", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea and Hugo Liu, A corpus-based approach to finding happiness, In Proceedings of the AAAI Spring Symposium on Computational Approaches for Analysis of Weblogs, Stanford, CA, USA, March 2006.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Contextual correlates of semantic similarity. Language and Cognitive Processes", "authors": [ { "first": "G", "middle": [], "last": "Miller", "suffix": "" }, { "first": "W", "middle": [], "last": "Charles", "suffix": "" } ], "year": 1991, "venue": "", "volume": "6", "issue": "", "pages": "1--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Miller and W. Charles. Contextual correlates of se- mantic similarity. Language and Cognitive Proc- esses, 6(1):1-28, 1991.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Sentiment analysis using support vector machines with diverse information sources", "authors": [ { "first": "T", "middle": [], "last": "Mullen", "suffix": "" }, { "first": "", "middle": [], "last": "Collier", "suffix": "" } ], "year": null, "venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP-2004)", "volume": "", "issue": "", "pages": "412--418", "other_ids": {}, "num": null, "urls": [], "raw_text": "T Mullen and N Collier. Sentiment analysis using sup- port vector machines with diverse information sources. In Dekang Lin and Dekai Wu, editors, Pro- ceedings of the 2004 Conference on Empirical Meth- ods in Natural Language Processing (EMNLP-2004), pages 412-418, Barcelona, Spain.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Analysis of affect expressed through the evolving language of online communication", "authors": [ { "first": "Alena", "middle": [], "last": "Neviarouskaya", "suffix": "" }, { "first": "Helmut", "middle": [], "last": "Prendinger", "suffix": "" }, { "first": "Mitsuru", "middle": [], "last": "Ishizuka", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 12th International Conference on Intelligent User Interfaces (IUI-07)", "volume": "", "issue": "", "pages": "278--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alena Neviarouskaya, Helmut Prendinger, and Mitsuru Ishizuka. Analysis of affect expressed through the evolving language of online communication. 
In Pro- ceedings of the 12th International Conference on In- telligent User Interfaces (IUI-07), pages 278-281, Honolulu, Hawaii, USA, 2007a.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Narrowing the Social Gap among People involved in Global Dialog: Automatic Emotion Detection in Blog Posts", "authors": [ { "first": "Alena", "middle": [], "last": "Neviarouskaya", "suffix": "" }, { "first": "Helmut", "middle": [], "last": "Prendinger", "suffix": "" }, { "first": "Mitsuru", "middle": [], "last": "Ishizuka", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the International Conference on Weblogs and Social Media (ICWSM 2007)", "volume": "", "issue": "", "pages": "293--294", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alena Neviarouskaya, Helmut Prendinger, and Mitsuru Ishizuka, Narrowing the Social Gap among People involved in Global Dialog: Automatic Emotion De- tection in Blog Posts, In Proceedings of the Interna- tional Conference on Weblogs and Social Media (ICWSM 2007), pages 293-294, Boulder, CO, USA, March 2007b.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "The cognitive structure of emotions", "authors": [ { "first": "A", "middle": [], "last": "Ortony", "suffix": "" }, { "first": "G", "middle": [ "L" ], "last": "Clore", "suffix": "" }, { "first": "A", "middle": [], "last": "Collins", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Ortony, G.L. Clore, and A. Collins, The cognitive structure of emotions. New York: Cambridge Uni- versity Press, 1988", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL'04)", "volume": "", "issue": "", "pages": "271--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang and Lillian Lee, 2004. A Sentimental Educa- tion: Sentiment Analysis Using Subjectivity Summa- rization Based on Minimum Cuts. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL'04), Barcelona, Spain, pages 271-278.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Thumbs up? Sentiment classification using machine learning techniques", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" }, { "first": "S", "middle": [], "last": "Vaithyanathan", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang, Lillian Lee, and S. Vaithyanathan, Thumbs up? 
Sentiment classification using machine learning techniques, In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Process- ing, Philadelphia, PA, 2002, 79-86.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Roget's Thesaurus of English Words and Phrases", "authors": [ { "first": "Peter", "middle": [], "last": "Mark", "suffix": "" }, { "first": "Roget", "middle": [], "last": "", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Mark Roget, Roget's Thesaurus of English Words and Phrases. Harlow, Essex, England: Longman Group Limited, 1852.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "WordNet-Affect: an affective extension of WordNet", "authors": [ { "first": "Carlo", "middle": [], "last": "Strapparava", "suffix": "" }, { "first": "", "middle": [], "last": "Valitutti", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC-2004)", "volume": "", "issue": "", "pages": "1083--1086", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlo Strapparava and A Valitutti, WordNet-Affect: an affective extension of WordNet. In Proceedings of the 4th International Conference on Language Re- sources and Evaluation (LREC-2004), Lisbon, 2004, 1083-1086.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Data Mining: Practical Machine Learning Tools and Techniques", "authors": [ { "first": "H", "middle": [], "last": "Ian", "suffix": "" }, { "first": "Eibe", "middle": [], "last": "Witten", "suffix": "" }, { "first": "", "middle": [], "last": "Frank", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian H. Witten and Eibe Frank. Data Mining: Practical Machine Learning Tools and Techniques (2nd ed.), Morgan Kaufmann, San Francisco, 2005. (www.cs.waikato.ac.nz/ml/weka/)", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Exploitation in Affect Detection in Open-Ended Improvisational Text", "authors": [ { "first": "L", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "J", "middle": [], "last": "Barnden", "suffix": "" }, { "first": "R", "middle": [], "last": "Hendley", "suffix": "" }, { "first": "A", "middle": [], "last": "Wallington", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the ACL Workshop on Sentiment and Subjectivity in Text", "volume": "", "issue": "", "pages": "47--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Zhang, J. Barnden, R. Hendley, and A. Wallington, Exploitation in Affect Detection in Open-Ended Im- provisational Text. In Proceedings of the ACL Work- shop on Sentiment and Subjectivity in Text, 2006, pages 47-54, Sydney, Australia.", "links": null } }, "ref_entries": { "TABREF0": { "type_str": "table", "text": "Affective Text: Semeval Task at the 4th International Work-.cs.swarthmore.edu/semeval/tasks/task14/summary.shtml).", "content": "
shop on Semantic Evaluations, 2007, Prague (nlp
", "num": null, "html": null }, "TABREF1": { "type_str": "table", "text": "", "content": "
Emotion Class | Number of sentences
Happiness | 536
Sadness | 173
Anger | 179
Disgust | 172
Surprise | 115
Fear | 115
No-emotion | 600
", "num": null, "html": null }, "TABREF5": { "type_str": "table", "text": "", "content": "", "num": null, "html": null } } } }