{ "paper_id": "I08-1039", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:40:48.413281Z" }, "title": "Learning to Shift the Polarity of Words for Sentiment Classification", "authors": [ { "first": "Daisuke", "middle": [], "last": "Ikeda", "suffix": "", "affiliation": {}, "email": "ikeda@lr.pi.titech.ac.jp" }, { "first": "Hiroya", "middle": [], "last": "Takamura", "suffix": "", "affiliation": {}, "email": "takamura@pi.titech.ac.jp" }, { "first": "Lev-Arie", "middle": [], "last": "Ratinov", "suffix": "", "affiliation": {}, "email": "ratinov2@uiuc.edu" }, { "first": "Manabu", "middle": [], "last": "Okumura", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We propose a machine learning-based method of sentiment classification of sentences using word-level polarity. The polarities of words in a sentence are not always the same as that of the sentence, because there can be polarity-shifters such as negation expressions. The proposed method models the polarity-shifters. Our model can be trained in two different ways: word-wise and sentence-wise learning. In sentence-wise learning, the model can be trained so that the prediction of sentence polarities is accurate. The model can also be combined with features used in previous work, such as bag-of-words and n-grams. We empirically show that our method almost always improves the performance of sentiment classification of sentences, especially when we have only a small amount of training data.", "pdf_parse": { "paper_id": "I08-1039", "_pdf_hash": "", "abstract": [ { "text": "We propose a machine learning-based method of sentiment classification of sentences using word-level polarity. The polarities of words in a sentence are not always the same as that of the sentence, because there can be polarity-shifters such as negation expressions. The proposed method models the polarity-shifters. 
Our model can be trained in two different ways: word-wise and sentence-wise learning. In sentence-wise learning, the model can be trained so that the prediction of sentence polarities is accurate. The model can also be combined with features used in previous work, such as bag-of-words and n-grams. We empirically show that our method almost always improves the performance of sentiment classification of sentences, especially when we have only a small amount of training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Due to the recent popularity of the internet, individuals can easily and actively provide information to the public (e.g., via weblogs or online bulletin boards). This information often includes opinions or sentiments on a variety of things, such as new products. A huge amount of work has been devoted to analyzing such information, a field called sentiment analysis. Sentiment analysis has been conducted at different levels, including words, sentences, and documents. Among them, we focus on the sentiment classification of sentences, the task of classifying sentences as \"positive\" or \"negative\", because this task is fundamental and has wide applicability in sentiment analysis. For example, we can retrieve individuals' opinions related to a product and determine whether they have a positive attitude toward the product.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There has been much work on the identification of the sentiment polarity of words. For instance, \"beautiful\" is positively oriented, while \"dirty\" is negatively oriented. We use the term sentiment words to refer to those words that are listed in a predefined polarity dictionary. Sentiment words are a basic resource for sentiment analysis and are thus believed to have great potential for applications. 
However, it is still an open problem how to effectively use sentiment words to improve the performance of sentiment classification of sentences or documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The simplest way for that purpose would be majority voting based on the numbers of positive and negative words in the given sentence. However, the polarities of words in a sentence are not always the same as that of the sentence, because there can be polarity-shifters such as negation expressions. This inconsistency between word-level polarity and sentence-level polarity often causes errors in classification by the simple majority voting method. A manually compiled list of polarity-shifters, which are the words that can shift the sentiment polarity of another word (e.g., negations), has been suggested. However, such a list has limitations due to the diversity of expressions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Therefore, we propose a machine learning-based method that models the polarity-shifters. The model can be trained in two different ways: word-wise and sentence-wise. While the word-wise learning focuses on the prediction of polarity shifts, the sentence-wise learning focuses more on the prediction of sentence polarities. The model can also be combined with features used in previous work, such as bag-of-words, n-grams, and dependency trees. We empirically show that our method almost always improves the performance of sentiment classification of sentences, especially when we have only a small amount of training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of the paper is organized as follows. In Section 2, we briefly review related work. In Section 3, we discuss well-known methods that use word-level polarities and describe our motivation. 
In Section 4, we describe our proposed model, how to train it, and how to classify sentences using it. We present our experiments and results in Section 5. Finally, in Section 6, we conclude our work and mention possible future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Supervised machine learning methods, including Support Vector Machines (SVMs), are often used in sentiment analysis and have been shown to be very promising (Pang et al., 2002; Matsumoto et al., 2005; Kudo and Matsumoto, 2004; Mullen and Collier, 2004; Gamon, 2004) . One of the advantages of these methods is that a wide variety of features, such as dependency trees and sequences of words, can easily be incorporated (Matsumoto et al., 2005; Kudo and Matsumoto, 2004; Pang et al., 2002) . Our attempt in this paper is not to use the information included in those substructures of sentences, but to use the word-level polarities, which are a resource usually at hand. 
Thus our work is an instantiation of the idea of using a resource from one linguistic layer (e.g., the word level) for the analysis of another layer (the sentence level).", "cite_spans": [ { "start": 144, "end": 163, "text": "(Pang et al., 2002;", "ref_id": "BIBREF11" }, { "start": 164, "end": 187, "text": "Matsumoto et al., 2005;", "ref_id": "BIBREF7" }, { "start": 188, "end": 213, "text": "Kudo and Matsumoto, 2004;", "ref_id": "BIBREF5" }, { "start": 214, "end": 239, "text": "Mullen and Collier, 2004;", "ref_id": "BIBREF10" }, { "start": 240, "end": 252, "text": "Gamon, 2004)", "ref_id": "BIBREF1" }, { "start": 404, "end": 428, "text": "(Matsumoto et al., 2005;", "ref_id": "BIBREF7" }, { "start": 429, "end": 454, "text": "Kudo and Matsumoto, 2004;", "ref_id": "BIBREF5" }, { "start": 455, "end": 473, "text": "Pang et al., 2002)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "There has been some work that focuses on multiple levels of text. Mao and Lebanon (2006) proposed a method that captures local sentiment flow in documents using isotonic conditional random fields. Pang and Lee (2004) proposed to eliminate objective sentences before the sentiment classification of documents. McDonald et al. (2007) proposed a model for classifying sentences and documents simultaneously. They experimented with joint classification of sentence-level subjectivity and document-level sentiment, and reported that their model obtained higher accuracy than the standard document classification model.", "cite_spans": [ { "start": 76, "end": 98, "text": "Mao and Lebanon (2006)", "ref_id": "BIBREF6" }, { "start": 207, "end": 226, "text": "Pang and Lee (2004)", "ref_id": null }, { "start": 319, "end": 341, "text": "McDonald et al. 
(2007)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Although these pieces of work aim to predict not sentence-level but document-level sentiments, their concepts are similar to ours. However, all the above methods require annotated corpora for all levels, such as both subjectivity labels for sentences and sentiment labels for documents, which are fairly expensive to obtain. Although we also focus on two different layers, our method does not require such expensive labeled data. What we require is just sentence-level labeled training data and a polarity dictionary of sentiment words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "One of the simplest ways to classify sentences using word-level polarities would be majority voting, where the occurrences of positive words and those of negative words in the given sentence are counted and compared with each other. However, this majority voting method has several weaknesses. First, majority voting cannot take into account the phenomenon that the word-level polarity is not always the same as the polarity of the sentence. Consider the following example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simple Voting by Sentiment Words", "sec_num": "3" }, { "text": "I have not had any distortion problems with this phone and am more pleased with this phone than any I've used before.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simple Voting by Sentiment Words", "sec_num": "3" }, { "text": "where negative words are underlined and positive words are double-underlined. The example sentence has positive polarity, though it locally contains negative words. 
Majority voting would misclassify it because of the two negative words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simple Voting by Sentiment Words", "sec_num": "3" }, { "text": "This kind of inconsistency between sentence-level polarity and word-level polarity often occurs and causes errors in majority voting. The reason is that majority voting cannot take into account negation expressions or adversative conjunctions, e.g., \"I have not had any ...\" in the example above. Therefore, taking such polarity-shifting into account is important for the classification of sentences using a polarity dictionary. To circumvent this problem, Kennedy and Inkpen (2006) and Hu and Liu (2004) proposed to use a manually constructed list of polarity-shifters. However, such a list has limitations due to the diversity of expressions.", "cite_spans": [ { "start": 461, "end": 486, "text": "Kennedy and Inkpen (2006)", "ref_id": null }, { "start": 491, "end": 508, "text": "Hu and Liu (2004)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Simple Voting by Sentiment Words", "sec_num": "3" }, { "text": "Another weakness of majority voting is that it cannot easily be combined with existing methods that use the n-gram model or tree structures of the sentence as features. The method we propose here can easily be combined with existing methods and shows better performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simple Voting by Sentiment Words", "sec_num": "3" }, { "text": "We assume that when the polarity of a word is different from the polarity of the sentence, the polarity of the word is shifted by its context to adapt to the polarity of the sentence. 
Capturing such polarity-shifts will improve the classification performance of the majority voting classifier as well as of more sophisticated classifiers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-Level Polarity-Shifting Model", "sec_num": "4" }, { "text": "In this paper, we propose a word polarity-shifting model to capture such phenomena. This model is a kind of binary classification model which determines whether the polarity is shifted by its context. The model assigns a score s_shift(x, S) to the sentiment word x in the sentence S. If the polarity of x is shifted in S, s_shift(x, S) > 0. If the polarity of x is not shifted in S, s_shift(x, S) \u2264 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-Level Polarity-Shifting Model", "sec_num": "4" }, { "text": "Let w be a parameter vector of the model and \u03c6 be a pre-defined feature function. Function s_shift is defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-Level Polarity-Shifting Model", "sec_num": "4" }, { "text": "s_shift(x, S) = w \u2022 \u03c6(x, S). (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-Level Polarity-Shifting Model", "sec_num": "4" }, { "text": "Since this model is a linear discriminative model, there are well-known algorithms to estimate the parameters of the model. Usually, such models are trained with each occurrence of words as one instance (word-wise learning). However, we can train our model more effectively with each sentence being one instance (sentence-wise learning). 
In this section, we describe how to train our model in two different ways and how to apply the model to sentence classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-Level Polarity-Shifting Model", "sec_num": "4" }, { "text": "In this learning method, we train the word-level polarity-shift model with each occurrence of sentiment words being an instance. Training examples are automatically extracted by finding sentiment words in labeled sentences. In the example of Section 3, for instance, both negative words (\"distortion\" and \"problems\") and a positive word (\"pleased\") appear in a positive sentence. We regard \"distortion\" and \"problems\", whose polarities are different from that of the sentence, as belonging to the polarity-shifted class. On the contrary, we regard \"pleased\", whose polarity is the same as that of the sentence, as not belonging to the polarity-shifted class.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-wise Learning", "sec_num": "4.1" }, { "text": "We can use majority voting with those (possibly polarity-shifted) sentiment words. Specifically, we first classify each sentiment word in the sentence according to whether its polarity is shifted or not. Then we use majority voting to determine the polarity of the sentence. If the first classifier classifies a positive word into the \"polarity-shifted\" class, we treat the word as a negative one. We expect that majority voting with polarity-shifting will outperform simple majority voting without polarity-shifting. We actually use weighted majority voting, where the polarity-shifting score for each sentiment word is used as the weight of the vote by the word. We expect that the score works as a confidence measure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-wise Learning", "sec_num": "4.1" }, { "text": "We can formulate this method as follows. 
Here, N and P are respectively defined as the sets of negative sentiment words and positive sentiment words. For instance, x \u2208 N means that x is a negative word. We also write x \u2208 S to express that the word x occurs in S.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-wise Learning", "sec_num": "4.1" }, { "text": "First, let us define two scores, score_p(S) and score_n(S), for the input sentence S. score_p(S) and score_n(S) respectively represent the number of votes for S being positive and the number of votes for S being negative. If score_p(S) > score_n(S), we regard the sentence S as having positive polarity; otherwise, negative. We suppose that the following relations hold for the scores:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-wise Learning", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "score_p(S) = \u2211_{x\u2208P\u2229S} \u2212s_shift(x, S) + \u2211_{x\u2208N\u2229S} s_shift(x, S),", "eq_num": "(2)" } ], "section": "Word-wise Learning", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "score_n(S) = \u2211_{x\u2208P\u2229S} s_shift(x, S) + \u2211_{x\u2208N\u2229S} \u2212s_shift(x, S).", "eq_num": "(3)" } ], "section": "Word-wise Learning", "sec_num": "4.1" }, { "text": "When either a polarity-unchanged positive word (s_shift(x, S) \u2264 0) or a polarity-shifted negative word occurs in the sentence S, score_p(S) increases. 
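As an illustration, the weighted voting above can be sketched as follows (a minimal sketch, not the implementation used in our experiments; the toy dictionaries and the placeholder shift-score function are hypothetical):

```python
# Minimal sketch of the weighted majority voting of Eq. (2):
# positive words vote with -s_shift, negative words with +s_shift,
# so a polarity-unchanged positive word (s_shift <= 0) raises score_p.

POSITIVE = {'pleased', 'great'}        # hypothetical dictionary P
NEGATIVE = {'distortion', 'problems'}  # hypothetical dictionary N

def s_shift(word, sentence):
    # Placeholder shift score: a trained linear model w . phi(x, S)
    # would go here; this toy version shifts a word if 'not' occurs.
    return 1.0 if 'not' in sentence else -1.0

def score_p(sentence):
    total = 0.0
    for word in sentence:
        if word in POSITIVE:
            total += -s_shift(word, sentence)
        elif word in NEGATIVE:
            total += s_shift(word, sentence)
    return total  # classify as positive iff score_p > 0

sent = 'i have not had any distortion problems with this phone'.split()
print(score_p(sent) > 0)  # the shifted negative words vote positive
```

In a real system, s_shift would be the learned model w \u2022 \u03c6(x, S) rather than a hand-written rule.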
We can easily obtain the following relation between the two scores:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-wise Learning", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "score_p(S) = \u2212score_n(S).", "eq_num": "(4)" } ], "section": "Word-wise Learning", "sec_num": "4.1" }, { "text": "Since, according to this relation, score_p(S) > score_n(S) is equivalent to score_p(S) > 0, we use only score_p(S) for the rest of this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-wise Learning", "sec_num": "4.1" }, { "text": "Equation (2) can be rewritten as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence-wise Learning", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "score_p(S) = \u2211_{x\u2208S} s_shift(x, S)I(x) = \u2211_{x\u2208S} w \u2022 \u03c6(x, S)I(x) = w \u2022 (\u2211_{x\u2208S} \u03c6(x, S)I(x)),", "eq_num": "(5)" } ], "section": "Sentence-wise Learning", "sec_num": "4.2" }, { "text": "where I(x) is the function defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence-wise Learning", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "I(x) = +1 if x \u2208 N; \u22121 if x \u2208 P; 0 otherwise.", "eq_num": "(6)" } ], "section": "Sentence-wise Learning", "sec_num": "4.2" }, { "text": "This score_p(S) can also be seen as a linear discriminative model and the parameters of the model can be estimated directly (i.e., without carrying out word-wise learning). Each labeled sentence in a corpus can be used as a training instance for the model. 
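The aggregation in Equation (5) can be sketched as follows (a minimal illustration under assumed context-window features; the feature names and toy dictionaries are hypothetical, not the paper's code):

```python
# Sketch of the sentence-level feature vector of Eq. (5):
# Phi(S) = sum over sentiment words x in S of phi(x, S) * I(x),
# where I(x) = +1 for negative words and -1 for positive words.
from collections import defaultdict

POSITIVE = {'pleased'}                 # hypothetical dictionary P
NEGATIVE = {'distortion', 'problems'}  # hypothetical dictionary N

def phi(i, words, window=3):
    # Context features of the sentiment word at position i
    # (three words to the left and right, as assumed here).
    feats = defaultdict(float)
    for j in range(max(0, i - window), min(len(words), i + window + 1)):
        if j != i:
            feats['ctx=' + words[j]] += 1.0
    return feats

def sentence_features(words):
    agg = defaultdict(float)
    for i, w in enumerate(words):
        sign = 1.0 if w in NEGATIVE else -1.0 if w in POSITIVE else 0.0
        if sign:
            for k, v in phi(i, words).items():
                agg[k] += sign * v
    return agg  # one such vector per labeled sentence trains w directly

feats = sentence_features('not had any distortion problems at all'.split())
print(feats['ctx=not'])
```

Each labeled sentence yields one aggregated vector, so the linear model is fit to sentence labels without any word-level shift annotation.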
In this method, the model is learned so that the predictive ability for sentence classification is optimized, instead of the predictive ability for polarity-shifting. Therefore, this model can remain indecisive on the classification of word instances that have little contextual evidence about whether polarity-shifting occurs or not. The model can rely more heavily on word instances that have more evidence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence-wise Learning", "sec_num": "4.2" }, { "text": "In contrast, the word-wise learning trains the model with all the sentiment words appearing in a corpus. It is assumed here that all the sentiment words are related to the sentence-level polarity, and that we can always find evidence of the phenomenon that the polarity of a word is different from that of a sentence. Obviously, this assumption is not always correct. As a result, the word-wise learning sometimes puts a large weight on a context word that is irrelevant to polarity-shifting. This might degrade the performance of sentence classification as well as of polarity-shifting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence-wise Learning", "sec_num": "4.2" }, { "text": "Both methods described in Sections 4.1 and 4.2 predict the sentence-level polarity using only the word-level polarity. On the other hand, several methods that use other sets of features, for example, bag-of-words, n-grams, or dependency trees, have been proposed for sentence or document classification tasks. We propose to combine our method with such existing methods. We refer to this combination as the hybrid model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid Model", "sec_num": "4.3" }, { "text": "In recent work, discriminative models including SVMs are often used with many different features. 
These methods are generally represented as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid Model", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "score\u2032_p(X) = w\u2032 \u2022 \u03c6\u2032(X),", "eq_num": "(7)" } ], "section": "Hybrid Model", "sec_num": "4.3" }, { "text": "where X indicates the target of classification, for example, a sentence or a document. If score\u2032_p(X) > 0, X is classified into the target class. \u03c6\u2032(X) is a feature function. When the method uses the bag-of-words model, \u03c6\u2032 maps X to a vector with each element corresponding to a word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid Model", "sec_num": "4.3" }, { "text": "Here, we define a new score function score_comb(S) as a linear combination of score_p(S), the score function of our sentence-wise learning, and score\u2032_p(S), the score function of an existing method. Using this, we can write the function as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid Model", "sec_num": "4.3" }, { "text": "score_comb(S) = \u03bb score_p(S) + (1 \u2212 \u03bb) score\u2032_p(S) = \u03bb \u2211_{x\u2208S} w \u2022 \u03c6(x, S)I(x) + (1 \u2212 \u03bb) w\u2032 \u2022 \u03c6\u2032(S) = w_comb \u2022 ((\u03bb \u2211_{x\u2208S} \u03c6(x, S)I(x)) \u2295 ((1 \u2212 \u03bb)\u03c6\u2032(S))). (8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid Model", "sec_num": "4.3" }, { "text": "Note that \u2295 indicates the concatenation of two vectors, w_comb is defined as w \u2295 w\u2032, and \u03bb is a parameter which controls the influence of the word-level polarity-shifting model. This model is also a discriminative model and we can estimate the parameters with a variety of algorithms including SVMs. 
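The concatenation in Equation (8) can be sketched as follows (a minimal illustration; the feature-name prefixes emulate vector concatenation and are our own convention here, not the paper's code):

```python
# Sketch of the hybrid feature construction of Eq. (8): concatenate
# the polarity-shift features (scaled by lambda) with ordinary
# bag-of-words features (scaled by 1 - lambda); a single linear
# model w_comb is then trained on the concatenated vector.

def hybrid_features(shift_feats, bow_feats, lam=0.5):
    combined = {}
    for k, v in shift_feats.items():
        combined['shift:' + k] = lam * v          # lambda * Phi(S) part
    for k, v in bow_feats.items():
        combined['bow:' + k] = (1.0 - lam) * v    # (1 - lambda) * phi'(S) part
    return combined

shift = {'ctx=not': 1.0}          # aggregated polarity-shift features
bow = {'phone': 1.0, 'not': 1.0}  # bag-of-words features
print(hybrid_features(shift, bow)['shift:ctx=not'])
```

Because the two feature groups live in disjoint (prefixed) coordinates, training one weight vector on the combined dictionary is equivalent to learning w and w\u2032 jointly.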
We can incorporate additional information like bag-of-words or dependency trees through \u03c6\u2032(S).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid Model", "sec_num": "4.3" }, { "text": "Features such as n-grams or dependency trees can also capture some negations or polarity-shifters. For example, although \"satisfy\" is positive, the bigram model will learn \"not satisfy\" as a feature correlated with negative polarity if it appears in the training data. However, the bigram model cannot generalize the learned knowledge to other features such as \"not great\" or \"not disappoint\". On the other hand, our polarity-shifter model learns that the word \"not\" causes polarity-shifts. Therefore, even if there was no \"not disappoint\" in the training data, our model can determine that \"not disappoint\" is correlated with the positive class, because the dictionary contains \"disappoint\" as a negative word. For this reason, the polarity-shifting model can be learned even from smaller training data. What we can obtain from the proposed method is not only a set of polarity-shifters. We can also obtain the weight vector w, which indicates the strength of each polarity-shifter and is learned so that the predictive ability of sentence classification is optimized, especially in the sentence-wise learning. It is impossible to manually determine such weights for numerous features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussions on the Proposed Model", "sec_num": "4.4" }, { "text": "It is also worth noting that all the models proposed in this paper can be represented as kernel functions. 
For example, the hybrid model can be seen as the following kernel:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussions on the Proposed Model", "sec_num": "4.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "K_comb(S_1, S_2) = \u03bb \u2211_{x_i\u2208S_1} \u2211_{x_j\u2208S_2} K((x_i, S_1), (x_j, S_2)) + (1 \u2212 \u03bb)K\u2032(S_1, S_2).", "eq_num": "(9)" } ], "section": "Discussions on the Proposed Model", "sec_num": "4.4" }, { "text": "Here, K denotes the kernel function between words and K\u2032 denotes the kernel function between sentences. In addition, \u2211_{x_i} \u2211_{x_j} K((x_i, S_1), (x_j, S_2)) can be seen as an instance of convolution kernels, which were proposed by Haussler (1999) . Convolution kernels are a general class of kernel functions which are calculated on the basis of kernels between substructures of inputs. Our proposed kernel treats sentences as input, and treats sentiment words as substructures of sentences. We can use high-degree polynomial kernels as both K, the kernel between substructures (i.e., sentiment words) of sentences, and K\u2032, the kernel between sentences, to make the classifiers take the combination of features into consideration.", "cite_spans": [ { "start": 78, "end": 93, "text": "Haussler (1999)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Discussions on the Proposed Model", "sec_num": "4.4" }, { "text": "We used two datasets, customer reviews 1 (Hu and Liu, 2004) and movie reviews 2 (Pang and Lee, 2005) to evaluate sentiment classification of sentences. 
Both datasets are often used for evaluation in sentiment analysis research. The number of examples and other statistics of the datasets are shown in Table 1.", "cite_spans": [ { "start": 41, "end": 59, "text": "(Hu and Liu, 2004)", "ref_id": "BIBREF3" }, { "start": 80, "end": 100, "text": "(Pang and Lee, 2005)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 316, "end": 323, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Datasets", "sec_num": "5.1" }, { "text": "Our method cannot be applied to sentences which contain no sentiment words. We therefore eliminated such sentences from the datasets. \"Available\" in Table 1 means the number of examples to which our method can be applied. \"Sentiment Words\" shows the number of sentiment words that are found in the given sentences. Recall that, in this paper, sentiment words are defined as those words that are listed in a predefined polarity dictionary. \"Inconsistent Words\" shows the number of words whose polarities conflicted with the polarity of the sentence.", "cite_spans": [], "ref_spans": [ { "start": 149, "end": 156, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Datasets", "sec_num": "5.1" }, { "text": "We performed 5-fold cross-validation and used the classification accuracy as the evaluation measure. We extracted sentiment words from the General Inquirer (Stone et al., 1996) and constructed a polarity dictionary. After some preprocessing, the dictionary contains 2,084 positive words and 2,685 negative words.", "cite_spans": [ { "start": 152, "end": 172, "text": "(Stone et al., 1996)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "5.1" }, { "text": "We employed the Max Margin Online Learning Algorithms for parameter estimation of the model (Crammer et al., 2006; McDonald et al., 2007) . In preliminary experiments, this algorithm yielded equal or better results compared to SVMs. 
As the feature representation \u03c6(x, S) of the polarity-shifting model, we used the local context of three words to the left and right of the target sentiment word. We used the polynomial kernel of degree 2 for the polarity-shifting model and the linear kernel for the other models. \u03c6(x, S) and \u03c6\u2032(S) are normalized respectively.", "cite_spans": [ { "start": 92, "end": 114, "text": "(Crammer et al., 2006;", "ref_id": "BIBREF0" }, { "start": 115, "end": 137, "text": "McDonald et al., 2007)", "ref_id": null } ], "ref_spans": [ { "start": 248, "end": 271, "text": "representation, \u03c6(x, S)", "ref_id": null } ], "eq_spans": [], "section": "Experimental Settings", "sec_num": "5.2" }, { "text": "We compared the following methods:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of the Methods", "sec_num": "5.3" }, { "text": "\u2022 Baseline classifies all sentences as positive.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of the Methods", "sec_num": "5.3" }, { "text": "\u2022 BoW uses unigram features. 2gram uses unigrams and bigrams. 3gram uses unigrams, bigrams, and 3grams.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of the Methods", "sec_num": "5.3" }, { "text": "\u2022 Simple-Voting is the simplest majority voting with word-level polarity (Section 3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of the Methods", "sec_num": "5.3" }, { "text": "\u2022 Negation Voting proposed by Hu and Liu (2004) is the majority voting that takes negations into account. 
As negations, we employed not, no, yet, never, none, nobody, nowhere, nothing, and neither, which are taken from (Polanyi and Zaenen, 2004; Kennedy and Inkpen, 2006; Hu and Liu, 2004) (Section 3).", "cite_spans": [ { "start": 219, "end": 245, "text": "(Polanyi and Zaenen, 2004;", "ref_id": "BIBREF15" }, { "start": 246, "end": 271, "text": "Kennedy and Inkpen, 2006;", "ref_id": null }, { "start": 272, "end": 288, "text": "Hu and Liu, 2004", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison of the Methods", "sec_num": "5.3" }, { "text": "\u2022 Word-wise was described in Section 4.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of the Methods", "sec_num": "5.3" }, { "text": "\u2022 Sentence-wise was described in Section 4.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of the Methods", "sec_num": "5.3" }, { "text": "\u2022 Hybrid BoW, hybrid 2gram, and hybrid 3gram are combinations of the sentence-wise model with BoW, 2gram, and 3gram, respectively (Section 4.3). We set \u03bb = 0.5. Table 2 shows the results of these experiments. Hybrid 3gram, which corresponds to the proposed method, obtained the best accuracy on the customer review dataset. However, on the movie review dataset, the proposed method did not outperform 3gram. In Section 5.4, we will discuss this result in detail. Comparing word-wise to simple-voting, the accuracy increased by about 7 points. This means that the polarity-shifting model can capture polarity-shifts, and that this is an important factor for sentiment classification. 
In addition, comparing sentence-wise with word-wise shows the effectiveness of sentence-wise learning in terms of accuracy.", "cite_spans": [], "ref_spans": [ { "start": 150, "end": 157, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Comparison of the Methods", "sec_num": "5.3" }, { "text": "\"Opt\" in Table 2 shows the results of the hybrid models with the optimal \u03bb and combination of models. The optimal hybrid models achieved the best accuracy on both datasets.", "cite_spans": [], "ref_spans": [ { "start": 9, "end": 16, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Comparison of the Methods", "sec_num": "5.3" }, { "text": "We show some dominating polarity-shifters obtained through learning. We obtained many negations (e.g., no, not, n't, never), modal verbs (e.g., might, would, may), prepositions (e.g., without, despite), a comma with a conjunction (e.g., \", but\" as in \"the case is strong and stylish, but lacks a window\"), and idiomatic expressions (e.g., \"hard resist\" as in \"it is hard to resist\", and \"real snooze\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of the Methods", "sec_num": "5.3" }, { "text": "When we have a large amount of training data, the n-gram classifier can learn well whether each n-gram tends to appear in the positive class or the negative class. However, when we have only a small amount of training data, the n-gram classifier cannot capture such a tendency. Therefore, external knowledge such as word-level polarity can be more valuable information for classification. 
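The hybrid combination of Section 4.3 reduces, at prediction time, to a λ-weighted sum of two models' decision scores. A minimal sketch, assuming real-valued decision functions for the two models; the stand-in scorers, the function names, and which score λ weights are illustrative assumptions, not the paper's exact formulation:

```python
# Sketch of the hybrid classifier: classify by the sign of a
# lambda-weighted sum of two decision scores. The two scorers below
# are toy stand-ins for the trained n-gram model and the
# sentence-wise polarity-shifting model.
def hybrid_predict(sentence, ngram_score, shift_score, lam=0.5):
    s = lam * ngram_score(sentence) + (1.0 - lam) * shift_score(sentence)
    return "positive" if s >= 0 else "negative"

# Toy stand-in scorers (illustrative only): any real-valued
# decision functions could be plugged in here.
ngram = lambda s: 0.8 if "good" in s else -0.4
shift = lambda s: -0.9 if "not" in s else 0.6

print(hybrid_predict("not good at all", ngram, shift))     # -0.05 -> negative
print(hybrid_predict("good sound quality", ngram, shift))  #  0.70 -> positive
```

With λ = 0.5 the two models vote with equal weight; tuning λ (as in the "Opt" row) shifts the balance toward whichever model is more reliable on a given dataset.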
Thus, the sentence-wise model and the hybrid model are expected to outperform the n-gram classifier, which does not take word-level polarity into account, by a larger margin when little training data is available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of Training Data Size", "sec_num": "5.4" }, { "text": "To verify this conjecture, we conducted experiments varying the number of training examples, i.e., labeled sentences. We evaluated three models: sentence-wise, 3gram, and hybrid 3gram on both customer review and movie review. Figures 1 and 2 show the results on customer review and movie review, respectively. When the size of the training data is small, sentence-wise outperforms 3gram on both datasets. We can also see that the advantage of sentence-wise becomes smaller as the amount of training data increases, and that the hybrid 3gram model almost always achieved the best accuracy among the three models. Similar behaviour was observed when we ran the same experiments with the 2gram or BoW model. From these results, we can conclude that, as expected above, word-level polarity is especially effective when we have only a limited amount of training data, and that the hybrid model can combine the two models effectively.", "cite_spans": [], "ref_spans": [ { "start": 243, "end": 258, "text": "Figures 1 and 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Effect of Training Data Size", "sec_num": "5.4" }, { "text": "We proposed a model that captures the polarity-shifting of sentiment words in sentences. We also presented two different learning methods for the model and proposed an augmented hybrid classifier that is based both on the model and on existing classifiers. We evaluated our method and reported that the proposed method almost always improved the accuracy of sentence classification compared with other, simpler methods. 
The improvement was more significant when we had only a limited amount of training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "For future work, we plan to explore new feature sets appropriate for our model. The feature sets we used for evaluation in this paper are not necessarily optimal, and we can expect better performance by exploring appropriate features. For example, dependency relations between words or appearances of conjunctions will be useful. The position of a word in the given sentence is also an important factor in sentiment analysis (Taboada and Grieve, 2004). Furthermore, we should directly take into account the fact that some words do not affect the polarity of the sentence, though the proposed method tackled this problem indirectly. This problem must be addressed in order to use word-level polarity more effectively. Lastly, since we proposed a method for sentence-level sentiment prediction, our next step is to extend the method to document-level sentiment prediction.", "cite_spans": [ { "start": 426, "end": 452, "text": "(Taboada and Grieve, 2004)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "http://www.cs.uic.edu/~liub/FBS/FBS. 
html 2 http://www.cs.cornell.edu/people/pabo/movie-review-data/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was supported in part by the Overseas Advanced Educational Research Practice Support Program of the Ministry of Education, Culture, Sports, Science and Technology.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Online Passive-Aggressive Algorithms", "authors": [ { "first": "Koby", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "Ofer", "middle": [], "last": "Dekel", "suffix": "" }, { "first": "Joseph", "middle": [], "last": "Keshet", "suffix": "" }, { "first": "Shai", "middle": [], "last": "Shalev-Shwartz", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2006, "venue": "In Journal of Machine Learning Research", "volume": "7", "issue": "", "pages": "551--585", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. Online Passive-Aggressive Algorithms. In Journal of Machine Learning Research, Vol.7, pp.551-585, 2006.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Sentiment classification on customer feedback data: noisy data, large feature vectors, and the role of linguistic analysis", "authors": [ { "first": "Michael", "middle": [], "last": "Gamon", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 20th International Conference on Computational Linguistics (COLING-2004)", "volume": "", "issue": "", "pages": "841--847", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Gamon. Sentiment classification on customer feedback data: noisy data, large feature vectors, and the role of linguistic analysis. 
In Proceedings of the 20th International Conference on Computational Linguistics (COLING-2004), pp.841-847, 2004.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Convolution Kernels on Discrete Structures", "authors": [ { "first": "David", "middle": [], "last": "Haussler", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Haussler. Convolution Kernels on Discrete Structures. Technical Report UCS-CRL-99-10, University of California at Santa Cruz, 1999.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Mining Opinion Features in Customer Reviews", "authors": [ { "first": "Minqing", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Nineteenth National Conference on Artificial Intelligence (AAAI-2004)", "volume": "", "issue": "", "pages": "755--760", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minqing Hu and Bing Liu. Mining Opinion Features in Customer Reviews. In Proceedings of the Nineteenth National Conference on Artificial Intelligence (AAAI-2004), pp.755-760, San Jose, USA, July 2004.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Sentiment Classification of Movie and Product Reviews Using Contextual Valence Shifters", "authors": [ { "first": "Alistair", "middle": [], "last": "Kennedy", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Inkpen", "suffix": "" } ], "year": 2005, "venue": "Workshop on the Analysis of Formal and Informal Information Exchange during Negotiations (FINEXIN-2005)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alistair Kennedy and Diana Inkpen. Sentiment Classification of Movie and Product Reviews Using Contextual Valence Shifters. 
In Workshop on the Analysis of Formal and Informal Information Exchange during Negotiations (FINEXIN-2005), 2005.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A Boosting Algorithm for Classification of Semi-Structured Text", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "301--308", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taku Kudo and Yuji Matsumoto. A Boosting Algorithm for Classification of Semi-Structured Text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-2004), pp.301-308, 2004.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Isotonic Conditional Random Fields and Local Sentiment Flow", "authors": [ { "first": "Yu", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Guy", "middle": [], "last": "Lebanon", "suffix": "" } ], "year": 2006, "venue": "Proceedings of Neural Information Processing Systems (NIPS-2006)", "volume": "", "issue": "", "pages": "961--968", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu Mao and Guy Lebanon. Isotonic Conditional Random Fields and Local Sentiment Flow. 
In Proceedings of Neural Information Processing Systems (NIPS-2006), pp.961-968, 2006.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Sentiment Classification using Word Sub-Sequences and Dependency Sub-Trees", "authors": [ { "first": "Shotaro", "middle": [], "last": "Matsumoto", "suffix": "" }, { "first": "Hiroya", "middle": [], "last": "Takamura", "suffix": "" }, { "first": "Manabu", "middle": [], "last": "Okumura", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 9th Pacific-Asia International Conference on Knowledge Discovery and Data Mining (PAKDD-2005)", "volume": "", "issue": "", "pages": "301--310", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shotaro Matsumoto, Hiroya Takamura, and Manabu Okumura. Sentiment Classification using Word Sub-Sequences and Dependency Sub-Trees. In Proceedings of the 9th Pacific-Asia International Conference on Knowledge Discovery and Data Mining (PAKDD-2005), pp.301-310, 2005.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Structured Models for Fine-to-Coarse Sentiment Analysis", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Kerry", "middle": [], "last": "Hannan", "suffix": "" }, { "first": "Tyler", "middle": [], "last": "Neylon", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Wells", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Reynar", "suffix": "" } ], "year": null, "venue": "Proceedings of the 45th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald, Kerry Hannan, Tyler Neylon, Mike Wells, and Jeff Reynar. Structured Models for Fine-to-Coarse Sentiment Analysis. 
In Proceedings of the 45th", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Annual Meeting of the Association for Computational Linguistics (ACL-2007)", "authors": [], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "432--439", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics (ACL-2007), pp.432-439, 2007.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Sentiment analysis using support vector machines with diverse information sources", "authors": [ { "first": "Tony", "middle": [], "last": "Mullen", "suffix": "" }, { "first": "Nigel", "middle": [], "last": "Collier", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "412--418", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tony Mullen and Nigel Collier. Sentiment analysis using support vector machines with diverse information sources. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-2004), pp.412-418, 2004.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Thumbs up? Sentiment Classification using Machine Learning Techniques", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Shivakumar", "middle": [], "last": "Vaithyanathan", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "76--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. Thumbs up? Sentiment Classification using Machine Learning Techniques. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-2002), pp.76-86, 2002.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": null, "venue": "Proceedings of the 42nd", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang and Lillian Lee. A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts. In Proceedings of the 42nd", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Annual Meeting of the Association for Computational Linguistics (ACL-2004)", "authors": [], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "271--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics (ACL-2004), pp.271-278, 2004.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL-2005)", "volume": "", "issue": "", "pages": "115--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang and Lillian Lee. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. 
In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL-2005), pp.115-124, 2005.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Contextual Valence Shifters", "authors": [ { "first": "Livia", "middle": [], "last": "Polanyi", "suffix": "" }, { "first": "Annie", "middle": [], "last": "Zaenen", "suffix": "" } ], "year": 2004, "venue": "AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications (AAAI-EAAT2004)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Livia Polanyi and Annie Zaenen. Contextual Valence Shifters. In AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications (AAAI-EAAT2004), 2004.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "The General Inquirer: A Computer Approach to Content Analysis", "authors": [ { "first": "Philip", "middle": [ "J" ], "last": "Stone", "suffix": "" }, { "first": "Dexter", "middle": [ "C" ], "last": "Dunphy", "suffix": "" }, { "first": "Marshall", "middle": [ "S" ], "last": "Smith", "suffix": "" }, { "first": "Daniel", "middle": [ "M" ], "last": "Ogilvie", "suffix": "" } ], "year": 1966, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip J. Stone, Dexter C. Dunphy, Marshall S. Smith, and Daniel M. Ogilvie. The General Inquirer: A Computer Approach to Content Analysis. 
The MIT Press, 1966.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Analyzing Appraisal Automatically", "authors": [ { "first": "Maite", "middle": [], "last": "Taboada", "suffix": "" }, { "first": "Jack", "middle": [], "last": "Grieve", "suffix": "" } ], "year": 2004, "venue": "AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications (AAAI-EAAT2004)", "volume": "", "issue": "", "pages": "158--161", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maite Taboada and Jack Grieve. Analyzing Appraisal Automatically. In AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications (AAAI-EAAT2004), pp.158-161, 2004.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "text": "Figure 1: Experimental results on customer review. Figure 2: Experimental results on movie review.", "uris": null }, "TABREF0": { "type_str": "table", "text": "", "html": null, "num": null, "content": "
Statistics of the corpus
                        customer   movie
# of Labeled Sentences  1,700      10,662
  Available             1,436      9,492
# of Sentiment Words    3,276      26,493
  Inconsistent Words    1,076      10,674
" }, "TABREF1": { "type_str": "table", "text": "Experimental results of the sentence classi-", "html": null, "num": null, "content": "
fication
methods          customer  movie
Baseline         0.638     0.504
BoW              0.790     0.724
2gram            0.809     0.756
3gram            0.800     0.762
Simple-Voting    0.716     0.624
Negation Voting  0.733     0.658
Word-wise        0.783     0.699
Sentence-wise    0.806     0.718
Hybrid BoW       0.827     0.748
Hybrid 2gram     0.840     0.755
Hybrid 3gram     0.837     0.758
Opt              0.840     0.770
" } } } }