{
"paper_id": "I11-1037",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:31:24.462542Z"
},
"title": "Enhancing the HL-SOT Approach to Sentiment Analysis via a Localized Feature Selection Framework",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Wei",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Norwegian University of Science",
"location": {}
},
"email": "wwei@idi.ntnu.no"
},
{
"first": "Jon",
"middle": [
"Atle"
],
"last": "Gulla",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Norwegian University of Science",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we propose a Localized Feature Selection (LFS) framework tailored to the HL-SOT approach to sentiment analysis. Within the proposed LFS framework, each node classifier of the HL-SOT approach is able to perform classification on target texts in a locally customized index term space. Extensive empirical analysis against a human-labeled data set demonstrates that with the proposed LFS framework the classification performance of the HL-SOT approach is enhanced with computational efficiency being greatly gained. To find the best feature selection algorithm that caters to the proposed LFS framework, five classic feature selection algorithms are comparatively studied, which indicates that the TS, DF, and MI algorithms achieve generally better performances than the CHI and IG algorithms. Among the five studied algorithms, the T-S algorithm is best to be employed by the proposed LFS framework.",
"pdf_parse": {
"paper_id": "I11-1037",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we propose a Localized Feature Selection (LFS) framework tailored to the HL-SOT approach to sentiment analysis. Within the proposed LFS framework, each node classifier of the HL-SOT approach is able to perform classification on target texts in a locally customized index term space. Extensive empirical analysis against a human-labeled data set demonstrates that with the proposed LFS framework the classification performance of the HL-SOT approach is enhanced with computational efficiency being greatly gained. To find the best feature selection algorithm that caters to the proposed LFS framework, five classic feature selection algorithms are comparatively studied, which indicates that the TS, DF, and MI algorithms achieve generally better performances than the CHI and IG algorithms. Among the five studied algorithms, the T-S algorithm is best to be employed by the proposed LFS framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "With tens and thousands of review texts being generated online, it becomes increasingly challenging for an individual to exhaustively collect and study the online reviews. Therefore, research on automatic sentiment analysis on review texts has emerged as a popular topic at the crossroads of information retrieval and computational linguistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Sentiment analysis on product reviews aims at extracting sentiment information from texts. It includes two tasks, i.e., labeling a target text 1 with 1 Each product review to be analyzed is called target text 1) the product's attributes it mentions (attributes identification task), and 2) the corresponding sentiments mentioned therein (sentiment annotation task). Recently, Wei and Gulla proposed the HL-SOT approach (Wei and Gulla, 2010) , i.e., Hierarchical Learning (HL) with Sentiment Ontology Tree (SOT), that is able to achieve the two tasks in one hierarchical classification process. In the HL-SOT approach, each target text is encoded by a vector in a globally unified d-dimensional index term space and is respectively labeled by different nodes 2 of SOT in a hierarchical manner. Although the HL-SOT approach is reported with promising classification performance on tasks of sentiment analysis, its computational efficiency, especially as d increases, becomes very low. Furthermore, as d increases it will have more chance to index noisy term into the globally unified index term space so that the classification performance of the HL-SOT approach might be depressed. Hence, we argue that if a locally customized index term space could be constructed for each node respectively, both the computational efficiency and the classification performance of the HL-SOT approach would be improved.",
"cite_spans": [
{
"start": 419,
"end": 440,
"text": "(Wei and Gulla, 2010)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a Localized Feature Selection (LFS) framework tailored to the HL-SOT approach. The rationale of the proposed LFS framework draws on the following two observations. Firstly, a feature term that is relevant to a node is usually irrelevant to nodes which stay at another branch of SOT. For example, \"ergonomics\" might be a feature term for the node \"design and usability\" (see Fig. 1 ) but it is irrelevant to the node \"image quality\". Secondly, a feature terin the following of this paper. Figure 1 : an example of part of a SOT for digital camera m might become insignificant for child nodes of i even if the feature term is significant to i. For example, for a sentence commenting on a digital camera like \"40D handles noise very well\", terms such as \"noise\" and \"well\" are significant feature terms for the node \"noise\". However, the term \"noise\" becomes insignificant for its child nodes \"noise +\" and \"noise -\", since the hierarchical classification characteristic of the HL-SOT approach that a node only processes target texts which are labeled as true by its parent node ensures that each target text handled by the nodes \"noise +\" and \"noise -\" is already classified as related to \"noise\".",
"cite_spans": [],
"ref_spans": [
{
"start": 400,
"end": 406,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the proposed LFS framework, the concept of \"local hierarchy\" is defined and introduced as delimitation of local scope of nodes. The localized feature selection process is conducted for each node within its local scope to generate the customized index term space for the node. The proposed LFS framework is empirically analyzed on a human-labeled data set. The experimental results show that with the proposed LFS framework the classification performance of the HL-SOT approach is enhanced and the computational efficiency is significantly improved. To test which is the best to be employed by the proposed LFS framework, we further comparatively study five classic feature selection algorithms respectively based on document frequency (DF) (Manning et al., 2008) , mutual information (MI) (Manning et al., 2008; Church and Hanks, 1990) , \u03c7 2statistic (CHI) (Manning et al., 2008) , information gain (IG) (Mitchell, 1997) , and term strength (T-S) (Wilbur and Sirotkin, 1992) . The comparatively experimental results suggest that the TS, DF, and MI algorithms achieve generally better performance than the CHI and IG algorithms. Among the five employed algorithms, the TS algorithm is the best to be employed by the proposed LFS framework. This paper makes the following contributions:",
"cite_spans": [
{
"start": 743,
"end": 765,
"text": "(Manning et al., 2008)",
"ref_id": "BIBREF14"
},
{
"start": 792,
"end": 814,
"text": "(Manning et al., 2008;",
"ref_id": "BIBREF14"
},
{
"start": 815,
"end": 838,
"text": "Church and Hanks, 1990)",
"ref_id": "BIBREF1"
},
{
"start": 860,
"end": 882,
"text": "(Manning et al., 2008)",
"ref_id": "BIBREF14"
},
{
"start": 907,
"end": 923,
"text": "(Mitchell, 1997)",
"ref_id": "BIBREF15"
},
{
"start": 950,
"end": 977,
"text": "(Wilbur and Sirotkin, 1992)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a LFS framework to enhance the classification performance and improve the computational efficiency of the HL-SOT approach;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We conduct a comparative study on five feature selection algorithms that can be employed in the proposed LFS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of the paper is organized as follows. In section 2, we discuss an overview of related work on sentiment analysis. In section 3, we review the HL-SOT approach proposed in (Wei and Gulla, 2010) . In section 4, we present the proposed LFS framework. The empirical analysis and the results are presented in section 5. Finally, we conclude the paper and discuss the future work in section 6.",
"cite_spans": [
{
"start": 184,
"end": 205,
"text": "(Wei and Gulla, 2010)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Research on sentiment analysis was originally performed to extract overall sentiments from target texts. However, as shown in the experiments in (Turney, 2002) , the whole sentiment of a document is not necessarily the sum of its parts. Recent work has shifted the focus from overall document sentiment to sentiment analysis based on product attributes (Hu and Liu, 2004; Popescu and Etzioni, 2005; Ding and Liu, 2007; Liu et al., 2005) . Document overall sentiment analysis is to summarize the overall sentiment in the document, which relies on two finer levels of sentiment annotation: word-level sentiment annotation and phrase-level sentiment annotation. The wordlevel sentiment annotation is to utilize the polarity annotation of words in each sentence and summarize the overall sentiment of each sentimentbearing word to infer the overall sentiment within the text (Hatzivassiloglou and Wiebe, 2000; Andreevskaia and Bergler, 2006; Esuli and Sebastiani, 2005; Esuli and Sebastiani, 2006; Hatzivassiloglou and McKeown, 1997; Kamps et al., 2004; Devitt and Ahmad, 2007; Yu and Hatzivassiloglou, 2003) . The phrase-level sentiment annotation focuses sentiment annotation on phrases not words with concerning that atomic units of expression is not individual words but rather appraisal groups (Whitelaw et al., 2005) . In (Wilson et al., 2005) , the concepts of prior polarity and contextual polarity were proposed. This paper presented a system that is able to automatically identify the contextual polarity for a large subset of sentiment expressions. In (Turney, 2002) , an unsupervised learning algorithm was proposed to classify reviews as recommended or not recommended by averaging sentiment annotation of phrases in reviews that contain adjectives or adverbs. However, the performances of these approaches are not satisfactory for sentiment analysis on product reviews, where sentiment on each attribute of a product could be so complicated that it is unable to be expressed by overall document sentiment.",
"cite_spans": [
{
"start": 145,
"end": 159,
"text": "(Turney, 2002)",
"ref_id": "BIBREF18"
},
{
"start": 353,
"end": 371,
"text": "(Hu and Liu, 2004;",
"ref_id": "BIBREF8"
},
{
"start": 372,
"end": 398,
"text": "Popescu and Etzioni, 2005;",
"ref_id": "BIBREF16"
},
{
"start": 399,
"end": 418,
"text": "Ding and Liu, 2007;",
"ref_id": "BIBREF3"
},
{
"start": 419,
"end": 436,
"text": "Liu et al., 2005)",
"ref_id": "BIBREF10"
},
{
"start": 871,
"end": 905,
"text": "(Hatzivassiloglou and Wiebe, 2000;",
"ref_id": "BIBREF7"
},
{
"start": 906,
"end": 937,
"text": "Andreevskaia and Bergler, 2006;",
"ref_id": "BIBREF0"
},
{
"start": 938,
"end": 965,
"text": "Esuli and Sebastiani, 2005;",
"ref_id": "BIBREF4"
},
{
"start": 966,
"end": 993,
"text": "Esuli and Sebastiani, 2006;",
"ref_id": "BIBREF5"
},
{
"start": 994,
"end": 1029,
"text": "Hatzivassiloglou and McKeown, 1997;",
"ref_id": "BIBREF6"
},
{
"start": 1030,
"end": 1049,
"text": "Kamps et al., 2004;",
"ref_id": "BIBREF9"
},
{
"start": 1050,
"end": 1073,
"text": "Devitt and Ahmad, 2007;",
"ref_id": "BIBREF2"
},
{
"start": 1074,
"end": 1104,
"text": "Yu and Hatzivassiloglou, 2003)",
"ref_id": "BIBREF23"
},
{
"start": 1295,
"end": 1318,
"text": "(Whitelaw et al., 2005)",
"ref_id": "BIBREF20"
},
{
"start": 1324,
"end": 1345,
"text": "(Wilson et al., 2005)",
"ref_id": "BIBREF22"
},
{
"start": 1559,
"end": 1573,
"text": "(Turney, 2002)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Attributes-based sentiment analysis is to analyze sentiment based on each attribute of a product. In (Hu and Liu, 2004) , mining product features was proposed together with sentiment polarity annotation for each opinion sentence. In that work, sentiment analysis was performed at the product attributes level. In (Liu et al., 2005) , a system with framework for analyzing and comparing consumer opinions of competing products was proposed. The system made users be able to clearly see the strengths and weaknesses of each product in the minds of consumers in terms of various product features. In (Popescu and Etzioni, 2005 ), Popescu and Etzioni not only analyzed polarity of opinions regarding product features but also ranked opinions based on their strength. In , Liu et al. proposed Sentiment-PLSA that analyzed blog entries and viewed them as a document generated by a number of hidden sentiment factors. These sentiment factors may also be factors based on product attributes. In (Lu and Zhai, 2008) , Lu et al. proposed a semi-supervised topic models to solve the problem of opinion integration based on the topic of a product's attributes. The work in (Titov and McDonald, 2008) presented a multi-grain topic model for extracting the ratable attributes from product reviews. In (Lu et al., 2009) , the problem of rated attributes summary was studied with a goal of generating ratings for major aspects so that a user could gain different perspectives towards a target entity. In a most recent research work (Wei and Gulla, 2010) , Wei and Gulla proposed the HL-SOT approach that sufficiently utilizes the hierarchical relationships among a product attributes and solves the sentiment analysis problem in a hierarchical classification process. However, the HL-SOT approach proposed in (Wei and Gulla, 2010 ) uses a globally unified index term space to encode target texts for different nodes which is deemed to limit the performance of the HL-SOT approach. Therefore, the LFS framework proposed in this paper aims at overcoming the weakness of the HL-SOT approach and consequently improving its performance by generating a locally customized index term space for each node.",
"cite_spans": [
{
"start": 101,
"end": 119,
"text": "(Hu and Liu, 2004)",
"ref_id": "BIBREF8"
},
{
"start": 313,
"end": 331,
"text": "(Liu et al., 2005)",
"ref_id": "BIBREF10"
},
{
"start": 597,
"end": 623,
"text": "(Popescu and Etzioni, 2005",
"ref_id": "BIBREF16"
},
{
"start": 987,
"end": 1006,
"text": "(Lu and Zhai, 2008)",
"ref_id": "BIBREF12"
},
{
"start": 1161,
"end": 1187,
"text": "(Titov and McDonald, 2008)",
"ref_id": "BIBREF17"
},
{
"start": 1287,
"end": 1304,
"text": "(Lu et al., 2009)",
"ref_id": "BIBREF13"
},
{
"start": 1516,
"end": 1537,
"text": "(Wei and Gulla, 2010)",
"ref_id": "BIBREF19"
},
{
"start": 1793,
"end": 1813,
"text": "(Wei and Gulla, 2010",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In the HL-SOT approach (Wei and Gulla, 2010), each target text is indexed by a vector $x \\in \\mathcal{X}$, $\\mathcal{X} = \\mathbb{R}^d$. Weight vectors $w_i$ ($1 \\leq i \\leq N$) define linear-threshold classifiers for each node $i$ in SOT, so that the target text $x$ is labeled true by node $i$ if $x$ is labeled true by $i$'s parent node and $w_i \\cdot x \\geq \\theta_i$. The parameters $w_i$ and $\\theta_i$ are learned from the training data set $D = \\{(r, l) \\mid r \\in \\mathcal{X}, l \\in \\mathcal{Y}\\}$, where $\\mathcal{Y}$ denotes the set of label vectors. In the training process, when a new instance $r_t$ is observed, each row vector $w_{i,t}$ is updated by a regularized least squares estimator given by: $w_{i,t} = (I + S_{i,Q(i,t-1)} S_{i,Q(i,t-1)}^{\\top} + r_t r_t^{\\top})^{-1} S_{i,Q(i,t-1)} (l_{i,i_1}, l_{i,i_2}, \\ldots, l_{i,i_{Q(i,t-1)}})^{\\top}$ (1), where $I$ is a $d \\times d$ identity matrix, $Q(i, t-1)$ denotes the number of times the parent of node $i$ observes a positive label before observing the instance $r_t$, $S_{i,Q(i,t-1)} = [r_{i_1}, \\ldots, r_{i_{Q(i,t-1)}}]$ is a $d \\times Q(i, t-1)$ matrix whose columns are the instances $r_{i_1}, \\ldots, r_{i_{Q(i,t-1)}}$, and $(l_{i,i_1}, l_{i,i_2}, \\ldots, l_{i,i_{Q(i,t-1)}})^{\\top}$ is a $Q(i, t-1)$-dimensional vector of the corresponding labels observed by node $i$. Formula (1) restricts the weight vector $w_{i,t}$ of classifier $i$ to be updated only on the examples that are positive for its parent node. The label vector $\\hat{y}_{r_t}$ is then computed for the instance $r_t$, before the real label vector $l_{r_t}$ is observed. Then the current threshold vector $\\theta_t$ is updated by:",
"cite_spans": [
{
"start": 23,
"end": 44,
"text": "(Wei and Gulla, 2010)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The HL-SOT Approach Review",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b8 t+1 = \u03b8 t + \u03f5(\u0177 rt \u2212 l rt ),",
"eq_num": "(2)"
}
],
"section": "The HL-SOT Approach Review",
"sec_num": "3"
},
{
"text": "where \u03f5 is a small positive real number that denotes a corrective step for the current threshold vector \u03b8 t . After the training process for each node of SOT, each target text is to be labeled by each node i parameterized by the weight vector w i and the threshold \u03b8 i in the hierarchical classification process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The HL-SOT Approach Review",
"sec_num": "3"
},
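{
"text": "To make the training update concrete, the following is a minimal NumPy sketch (our illustration, not the authors' implementation) of the per-node update: the regularized least squares weight estimate of Formula (1) and the corrective threshold step of Formula (2); the function names and the way the past positive-parent instances are passed in are assumptions for illustration only.\nimport numpy as np\n\ndef rls_weight(S_i, l_i, r_t):\n    # Formula (1): w = (I + S S^T + r_t r_t^T)^{-1} S l\n    # S_i: d x Q matrix of instances labeled positive by i's parent; l_i: their Q labels\n    d = r_t.shape[0]\n    A = np.eye(d) + S_i @ S_i.T + np.outer(r_t, r_t)\n    return np.linalg.solve(A, S_i @ l_i)\n\ndef threshold_step(theta_t, y_hat, l, eps=0.02):\n    # Formula (2): theta_{t+1} = theta_t + eps * (y_hat - l)\n    return theta_t + eps * (y_hat - l)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The HL-SOT Approach Review",
"sec_num": null
},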
{
"text": "In this section, we propose the LFS framework to generate a locally customized index term space for each node of SOT respectively. We first discuss why localized feature selection is needed for the HL-SOT approach. Then we define the concept of local hierarchy of SOT to introduce the local feature selection scope of a node, followed by a presentation on the local hierarchy based feature selection process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Localized Feature Selection",
"sec_num": "4"
},
{
"text": "One deficiency of the HL-SOT approach is that it uses a globally unified index term space to index target texts, which cannot efficiently encode feature information required by each local individual node of SOT. When we look into the detailed classification process of each node of SOT, we observe the following two types of phenomena. Firstly, SOT organizes domain knowledge in a tree like structure. Within a particular domain knowledge represented by SOT, nodes that stay in different branches of SOT represent independent different attributes in that domain. In this way, feature terms (e.g., the term \"ergonomics\") that are relevant to a node (e.g., the node \"design and usability\") might be irrelevant to other nodes (e.g., the node \"image quality\") that stay at another branches of SOT; Secondly, the HL-SOT approach labels each target text in a hierarchical order which ensures that each target text that comes to be handled by a node has already been labeled as true by its parent node. Due to this characteristic, feature terms (e.g., the term \"noise\") that are significant to a node i (e.g., the node \"noise\") might become a trivial term for i's child nodes (e.g., the nodes \"noise +\" and \"noise -\"). Therefore, the purpose of the localized feature selection is to filter out irrelevant terms that are insignificant to each individual node and build a locally customized index term space for the node so that the performance of the node can be improved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why Localized Feature Selection for the HL-SOT",
"sec_num": "4.1"
},
{
"text": "In order to select locally customized feature terms for each individual node, we need to define a suitable scope, called local feature selection scope 3 , within which the feature selection process can be effectively conducted for the node. Since the HL-SOT approach is a hierarchical classification process, before we introduce the local scope for a node we first give a formal definition on local hierarchy of SOT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Feature Selection Scope for a Node",
"sec_num": "4.2"
},
{
"text": "Definition 1 [Local Hierarchy] A local hierarchy \u2206 u of SOT is defined to be formed by all the child nodes of u in SOT, where the node u must be a non-leaf node of the SOT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Feature Selection Scope for a Node",
"sec_num": "4.2"
},
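{
"text": "As a minimal illustration of Definition 1 (a sketch under our own hypothetical SOT node representation, not code from the paper):\nclass SOTNode:\n    # hypothetical SOT node: a name plus a list of child nodes\n    def __init__(self, name, children=()):\n        self.name = name\n        self.children = list(children)\n\ndef local_hierarchy(u):\n    # Delta_u: all the child nodes of u; u must be a non-leaf node of SOT\n    assert u.children, 'u must be a non-leaf node of SOT'\n    return u.children\n\n# e.g., the nodes on the local hierarchy under 'weight' are 'weight +' and 'weight -'\nweight = SOTNode('weight', [SOTNode('weight +'), SOTNode('weight -')])\nprint([n.name for n in local_hierarchy(weight)])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Feature Selection Scope for a Node",
"sec_num": null
},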
{
"text": "By the Definition 1, we say all the child nodes of u are on the same local hierarchy under u which is denoted by \u2206 u . For examples, in Fig. 2 nodes \"camera +\", \"design and usability\", \"image quality\", \"lens\", \"camera -\" are deemed on the same local hierarchy under the node \"camera\" and nodes \"weight +\", \"weight -\" are deemed on the same local hierarchy under the node \"weight\", etc. In the hierarchical labeling process of the HL-SOT approach, after a target text is labeled as true by a node i it will go further to the local hierarchy under i and is to be labeled by all nodes on the local hierarchy \u2206 i . For a target text the labeling processes of nodes on \u2206 i locally can be considered as a multi-label classification process where each node is a local label. Therefore, the measurement for selecting terms as features should be calculated among nodes on the same hierarchy. Hence, the local scope for a node is defined within the local hierarchy which the node is on.",
"cite_spans": [],
"ref_spans": [
{
"start": 136,
"end": 142,
"text": "Fig. 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Local Feature Selection Scope for a Node",
"sec_num": "4.2"
},
{
"text": "In the proposed LFS framework, local feature selection for a node i of SOT is performed within the local scope of the node i. Since nodes on the same local hierarchy share the same local scope, local feature selection process for all nodes of SOT is achieved in local hierarchy based manner. Specifically, for the feature selection process on a local hierarchy \u2206, let c 1 , c 2 , ..., c K denote the K nodes on \u2206. Let D denote the training data set for the HL-SOT approach. Let D c k denote the set of instances in D that contains the label of the node c k (1 k K). Let D \u2206 denote the training corpus for the local hierarchy \u2206 which is the set of ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Hierarchy Based Feature Selection",
"sec_num": "4.3"
},
{
"text": "D \u2206 = \u222a K k=1 D c k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Hierarchy Based Feature Selection",
"sec_num": "4.3"
},
{
"text": "Let V c k denote the set of all the vocabularies that appears in D c k . Let s c k (w) denote the term score that measures the suitability of w as a feature for node c k . Let F c k denote the set of feature terms selected for c k . Let d c k denote the number of features to be selected in F c k . A local feature selection process for nodes on the local hierarchy \u2206 is described in Algorithm 1. In the data initialization phase of the Algorithm 1, the data instance set D c k and vocabulary set V c k for each node on the local hierarchy \u2206 as well as the training corpus D \u2206 are established. In a local feature selection process, a term score s c k (w) for each term w \u2208 V c k can be calculated by a specified feature selection algorithm, taking D \u2206 as the training corpus and D c k as the data instance set in the class c k . The local feature selection process can employ any specific feature selection algorithm to calculate the term scores. After all terms in V c k are calculated, those terms with top d c k scores are selected to establish the feature space F c k for the node c k . Since the number of terms in V c k varies from node to node, in order to produce a rational dimensionality d c k for the established feature space F c k , we introduce a feature selection rate, denoted by \u03b3, to control d c k for each node c k , i.e., d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Hierarchy Based Feature Selection",
"sec_num": "4.3"
},
{
"text": "After local feature selection processes for all the nods of SOT are accomplished, a locally customized index term space F c k for each node c k is established. Each target text will be respectively indexed by a customized vector x c k \u2208 X c k (X c k = R dc k ) when it goes through the hierarchical classification process of the HL-SOT approach. In next section, we will present the empirical analysis on evaluating the proposed LFS framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Localized Feature Selection Algorithm",
"sec_num": null
},
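{
"text": "The following is a minimal Python sketch of this local-hierarchy-based selection (our illustration of the process described above, under stated assumptions: documents are lists of terms, node_docs maps each node on the local hierarchy to the ids of its labeled instances, and score_fn is any pluggable scoring algorithm; df_score shows the document-frequency variant):\ndef df_score(w, D_ck, D_delta):\n    # document frequency of term w within the node's instance set D_ck\n    return sum(w in doc for doc in D_ck)\n\ndef local_feature_selection(node_docs, D, score_fn, gamma=0.2):\n    # D_delta: union of all instances labeled by any node on the local hierarchy\n    D_delta = [D[i] for i in set().union(*node_docs.values())]\n    features = {}\n    for c_k, ids in node_docs.items():\n        D_ck = [D[i] for i in ids]\n        V_ck = {w for doc in D_ck for w in doc}  # local vocabulary of c_k\n        d_ck = int(len(V_ck) * gamma)  # d_ck = |V_ck| * gamma\n        ranked = sorted(V_ck, key=lambda w: score_fn(w, D_ck, D_delta), reverse=True)\n        features[c_k] = set(ranked[:d_ck])  # the top-d_ck terms form F_ck\n    return features",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1 Localized Feature Selection Algorithm",
"sec_num": null
},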
{
"text": "In this section, we conduct extensive experiments to empirically analyze the proposed LFS framework. Our experiments are intended to address the following questions: (1) can the classification performance of the HL-SOT approach be improved with the LFS framework; (2) how much computational efficiency can be gained for the HL-SOT to be implemented in the LFS framework; (3) how are the comparative performances produced by different feature selection algorithms when employed in the proposed LFS framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Analysis",
"sec_num": "5"
},
{
"text": "We construct our data set based on the digital camera review data set used in the HL-SOT approach (Wei and Gulla, 2010) . In total, the constructed data set contains 1500 snippets of customer reviews on digital cameras, where 35 attributes of a digital camera are mentioned in the review data. We build an ontology structure to organize the mentioned attributes and label each review text with correspondent attributes as well as sentiments, which complying the rule that if a review text is assigned with a label of a node then it is assigned with a label of the parent node. We randomly divide the labeled data set into five folds so that each fold at least contains one example instance labeled by each attribute node. To catch the statistical significance of experimental results, we perform 5 cross-fold evaluation by using four folds as training data and the other one fold as testing data. All the experimental results presented in this section are averaged over 5 runs of each experiment.",
"cite_spans": [
{
"start": 98,
"end": 119,
"text": "(Wei and Gulla, 2010)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set Preparation",
"sec_num": "5.1"
},
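{
"text": "A minimal sketch of such a constrained random split (our illustration; labels_of and the retry strategy are assumptions, not the authors' procedure, and we assume a qualifying partition exists for the data):\nimport random\n\ndef five_fold_split(n_texts, labels_of, attribute_nodes, seed=0):\n    # retry random partitions until every fold contains at least one\n    # instance labeled by each attribute node\n    rng = random.Random(seed)\n    while True:\n        idx = list(range(n_texts))\n        rng.shuffle(idx)\n        folds = [idx[k::5] for k in range(5)]\n        if all(any(a in labels_of(i) for i in fold)\n               for fold in folds for a in attribute_nodes):\n            return folds",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set Preparation",
"sec_num": null
},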
{
"text": "Since the existing HL-SOT approach is a hierarchical classification process, we use the same three classic loss functions (Wei and Gulla, 2010) for measuring classification performance. They are respectively the One-error Loss (O-Loss) function, the Symmetric Loss (S-Loss) function, and the Hierarchical Loss (H-Loss) function 4 :",
"cite_spans": [
{
"start": 122,
"end": 143,
"text": "(Wei and Gulla, 2010)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.2"
},
{
"text": "\u2022 One-error loss (O-Loss) function is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "LO(\u0177, l) = B(\u2203i :\u0177i \u0338 = li),",
"eq_num": "(3)"
}
],
"section": "Evaluation Metrics",
"sec_num": "5.2"
},
{
"text": "where\u0177 is the prediction label vector and l is the true label vector; B(S) is a boolean function which is 1 if and only if the statement S is true, otherwise it is 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.2"
},
{
"text": "\u2022 Symmetric loss (S-Loss) function is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "LS(\u0177, l) = N \u2211 i=1 B(\u0177i \u0338 = li),",
"eq_num": "(4)"
}
],
"section": "Evaluation Metrics",
"sec_num": "5.2"
},
{
"text": "\u2022 Hierarchical loss (H-Loss) function is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "LH (\u0177, l) = N \u2211 i=1 B(\u0177i \u0338 = li \u2227 \u2200j \u2208 A(i),\u0177j = lj),",
"eq_num": "(5)"
}
],
"section": "Evaluation Metrics",
"sec_num": "5.2"
},
{
"text": "where A denotes a set of nodes that are ancestors of node i in SOT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.2"
},
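{
"text": "For clarity, the three loss functions transcribe directly into NumPy (a sketch; label vectors are assumed to be 0/1 NumPy arrays and ancestors[i] the list of indices of the ancestors of node i):\nimport numpy as np\n\ndef o_loss(y_hat, l):\n    # Eq. (3): 1 iff any label disagrees\n    return int(np.any(y_hat != l))\n\ndef s_loss(y_hat, l):\n    # Eq. (4): number of disagreeing labels\n    return int(np.sum(y_hat != l))\n\ndef h_loss(y_hat, l, ancestors):\n    # Eq. (5): count mistakes at i whose ancestors are all labeled correctly\n    return sum(int(y_hat[i] != l[i] and all(y_hat[j] == l[j] for j in ancestors[i]))\n               for i in range(len(l)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": null
},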
{
"text": "In this section, we conduct experiments to show performance improvement from the proposed LF-S framework. The performance considered here include both classification performance and computational efficiency. We use the existing HL-SOT approach as a baseline. Since the HL-SOT approach used terms' document frequencies (D-F) (Manning et al., 2008) algorithm to select features to build the globally unified index term space, employing the same DF feature selection algorithm we apply the proposed LFS framework on the HL-SOT approach and call the implemented method \"DF-SOT\". The only difference between HL-SOT and DF-SOT is the index term space for each node of SOT, i.e., in the HL-SOT all the nodes using the globally unified index term space while in the DF-SOT each node respectively using a locally customized index term space. In this way, the performance difference between the two methods will indicate the effect of the proposed LFS framework.",
"cite_spans": [
{
"start": 324,
"end": 346,
"text": "(Manning et al., 2008)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "5.3"
},
{
"text": "We conduct experiments to investigate whether the classification performance of the HL-SOT can be improved when it is implemented with the LFS framework. Fig. 3 presents the experimental results of classification accuracies between HL-SOT and DF-SOT. In the experiments, the dimensionality d of the globally unified index term space of the HL-SOT approach is set to 270, which is large enough for the HL-SOT approach to reach its best performance level. The feature selection rate \u03b3 for the locally customized index term space of the DF-SOT approach is set to 0.2 and 0.3, which brings respectively 80% and 70% vocabulary reduction. The value of the corrective step \u03f5 is set to varying from 0.005 to 0.05 with each step of 0.005 so that each running approach can achieve its best performance with a certain value of \u03f5. From Fig. 3 , we can observe that when \u03b3 = 0.2 the DF-SOT approach reaches its best performance with 0.6953 (\u03f5 = 0.02) on O-Loss, 1.5516 (\u03f5 = 0.045) on S-Loss, and 1.0578 (\u03f5 = 0.04) on H-Loss, and that when \u03b3 = 0.3 the DF-SOT approach reaches its best performance with 0.6953 (\u03f5 = 0.015) on O-Loss, 1.5531 (\u03f5 = 0.02) on S-Loss, and 1.0547 (\u03f5 = 0.025) on H-Loss, which outperforms the best performance of the HL-SOT approach on O-Loss 0.6984 (\u03f5 = 0.025), on S-Loss 1.6188 (\u03f5 = 0.025), and on H-Loss 1.0969 (\u03f5 = 0.05). This indicates that with the proposed LFS framework, compared with the HL-SOT approach, the DF-SOT approach generally improves the classification performance. ",
"cite_spans": [],
"ref_spans": [
{
"start": 154,
"end": 160,
"text": "Fig. 3",
"ref_id": "FIGREF1"
},
{
"start": 819,
"end": 830,
"text": "From Fig. 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Comparison on Classification Performance",
"sec_num": "5.3.1"
},
{
"text": "We conduct further experiments to analyze computational efficiency gained through the proposed LFS framework. All the experiments are conducted on a normal personal computer containing an Intel Pentium D CPU (2.4 GHz, Dual Core) and 4G memory. Fig. 4 summarizes the computational time consumed by experiment runs respectively for HL-SOT (d = 270) and DF-SOT (\u03b3 = 0.2 and \u03b3 = 0.3). From Fig. 4 , we can observe that the HL-SOT approach consumes 15917695 ms to finish an experimental run, although the DF-SOT approach only takes respectively 2.29% (with \u03b3 = 0.2 ) and 4.91% (with \u03b3 = 0.2 ) of computational time as the existing HL-SOT approach consumes and achieves even better classification performance than the HL-SOT approach (see Fig.3 ). This confirms that much computational efficiency can be gained for the HL-SOT approach to be implemented in the LFS framework while better classification performance is ensured. Since the computational complexity of each node classifier of DF-SOT is the same as HL-SOT, the computational efficiency gained from the proposed LFS framework should be attributed to the dimension reduction of the index term space.",
"cite_spans": [],
"ref_spans": [
{
"start": 244,
"end": 250,
"text": "Fig. 4",
"ref_id": "FIGREF2"
},
{
"start": 386,
"end": 392,
"text": "Fig. 4",
"ref_id": "FIGREF2"
},
{
"start": 733,
"end": 738,
"text": "Fig.3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Comparison on Computational Efficiency",
"sec_num": "5.3.2"
},
{
"text": "The proposed LFS framework for the HL-SOT approach can employ various feature selection algorithms to select local features for each individual node. In this section, we conduct intensive experiments to comparatively study five classic feature selection algorithms employed within the LFS framework. The five employed feature selection algorithms are respectively document frequency (Manning et al., 2008) based feature selection algorithm, mutual information (MI) (Manning et al., 2008; Church and Hanks, 1990) based feature selection algorithm, \u03c7 2 -statistic (CHI) (Manning et al., 2008) based feature selection algorithm, information gain (IG) (Mitchell, 1997) based feature selection algorithm as well as term strength (TS) (Wilbur and Sirotkin, 1992) based feature selection algorithm 5 . In the experiments, the feature selection rate \u03b3 is set to 0.2 and 0.3 respectively. The value of the corrective step \u03f5 varies from 0.005 to 0.05 with each step of 0.005. The experimental results are summarized in Fig. 5. From Fig. 5 it is observed that DF, MI, and TS feature selection algorithms achieve generally better performances than CHI and IG feature selection algorithms when they are employed in the proposed LFS framework. Specifically, the TS algorithm is generally the best among the five employed algorithms while the DF algorithm can also achieve as comparable good performance as the TS algorithm does. This is due to that both the TS and the DF algorithms favor high frequency terms and vocabularies used in customer reviews on a specific product are usually overlapping. When \u03b3 = 0.3, it can be also observed that the MI algorithm achieves as comparable good performance as the TS algorithm does. This is because, in customer reviews, although some vocabularies are rarely used they always occur as significant features in some specific categories. For example, \"ergonomics\" is a rare term but almost always appears in the class of \"design and usability\". Therefore, the MI algorithm can also achieve relatively better performance through favoring rare terms that always co-occur with specific classes.",
"cite_spans": [
{
"start": 383,
"end": 405,
"text": "(Manning et al., 2008)",
"ref_id": "BIBREF14"
},
{
"start": 465,
"end": 487,
"text": "(Manning et al., 2008;",
"ref_id": "BIBREF14"
},
{
"start": 488,
"end": 511,
"text": "Church and Hanks, 1990)",
"ref_id": "BIBREF1"
},
{
"start": 568,
"end": 590,
"text": "(Manning et al., 2008)",
"ref_id": "BIBREF14"
},
{
"start": 648,
"end": 664,
"text": "(Mitchell, 1997)",
"ref_id": "BIBREF15"
},
{
"start": 729,
"end": 756,
"text": "(Wilbur and Sirotkin, 1992)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 1009,
"end": 1028,
"text": "Fig. 5. From Fig. 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparative Study on Feature Selection Algorithms",
"sec_num": "5.4"
},
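{
"text": "As one example of a pluggable scoring function for the LFS framework, the following is a smoothed pointwise-mutual-information style score (a sketch consistent with the score_fn interface assumed earlier, not the exact formulation of the cited references):\nimport math\n\ndef mi_score(w, D_ck, D_delta):\n    # PMI-style association between term w and class c_k, with add-one smoothing\n    N = len(D_delta)  # corpus size on the local hierarchy\n    n_w = sum(w in doc for doc in D_delta)  # docs containing w\n    n_c = len(D_ck)  # docs labeled c_k\n    n_wc = sum(w in doc for doc in D_ck)  # docs labeled c_k that contain w\n    return math.log((n_wc * N + 1.0) / (n_w * n_c + 1.0))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparative Study on Feature Selection Algorithms",
"sec_num": null
},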
{
"text": "In this paper, we propose a LFS framework tailored to the HL-SOT approach to sentiment analysis. In the proposed LFS framework, significant feature terms of each node can be selected to construct the locally customized index term space for the node so that the classification performance and computational efficiency of the existing HL-SOT approach are improved. The effectiveness of the proposed LFS is validated against a humanlabeled data set. Further comparative study on five employed feature selection algorithms within the proposed LFS framework indicates that the TS, DF, and MI algorithms achieve generally better performance than the CHI and IG algorithms. Among the five employed algorithms, the TS algorithm is the best to be employed by the proposed LFS framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "Although the proposed LFS framework shows its effectiveness of improving on the HL-SOT approach, its improvement on the classification performance is not so obvious compared with its much improvement on computational efficiency. Due to the limited number of instances in the training data set, the classification performance still suffers from the problem that unobserved terms appear in testing cases. This problem is inherently raised by the bag-of-word model. A conceptbased indexing scheme that can infer concepts of unobserved terms might alleviate the problem. We plan to investigate on this issue in the future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "If specified otherwise in the following of this paper the term \"node\" refers to the classifier of the node.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In this paper, we also call it \"local scope\" for short.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Since the three loss functions are respectively welldefined by each formula and self-explained by their names, due to the space limitation, we do not present more explanation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Due to the space limitation, details of the studied feature selection algorithms are not reviewed here. The mechanism of each algorithm can be read in the related references.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors would like to thank the anonymous reviewers for the helpful comments on the manuscript. This work is funded by the Research Council of Norway under the VERDIKT research programme (Project No.: 183337).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Mining wordnet for a fuzzy sentiment: Sentiment tag extraction from wordnet glosses",
"authors": [
{
"first": "Alina",
"middle": [],
"last": "Andreevskaia",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Bergler",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of 11th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alina Andreevskaia and Sabine Bergler. 2006. Min- ing wordnet for a fuzzy sentiment: Sentiment tag extraction from wordnet glosses. In Proceedings of 11th Conference of the European Chapter of the As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Word association norms, mutual information, and lexicography",
"authors": [
{
"first": "Kenneth",
"middle": [
"Ward"
],
"last": "Church",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "1",
"pages": "22--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicog- raphy. Computational Linguistics, 16(1):22-29.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Sentiment polarity identification in financial news: A cohesionbased approach",
"authors": [
{
"first": "Ann",
"middle": [],
"last": "Devitt",
"suffix": ""
},
{
"first": "Khurshid",
"middle": [],
"last": "Ahmad",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of 45th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ann Devitt and Khurshid Ahmad. 2007. Sentiment polarity identification in financial news: A cohesion- based approach. In Proceedings of 45th Annual Meeting of the Association for Computational Lin- guistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The utility of linguistic rules in opinion mining",
"authors": [
{
"first": "Xiaowen",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 30th Annual International ACM SIGIR Conference on Research and development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaowen Ding and Bing Liu. 2007. The utility of lin- guistic rules in opinion mining. In Proceedings of the 30th Annual International ACM SIGIR Confer- ence on Research and development in Information Retrieval.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Determining the semantic orientation of terms through gloss classification",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Esuli",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Sebastiani",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of 14th ACM Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Esuli and Fabrizio Sebastiani. 2005. Deter- mining the semantic orientation of terms through gloss classification. In Proceedings of 14th ACM Conference on Information and Knowledge Man- agement.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Sentiwordnet: A publicly available lexical resource for opinion mining",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Esuli",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Sebastiani",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of 5th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Esuli and Fabrizio Sebastiani. 2006. Senti- wordnet: A publicly available lexical resource for opinion mining. In Proceedings of 5th International Conference on Language Resources and Evaluation.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Predicting the semantic orientation of adjectives",
"authors": [
{
"first": "Vasileios",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [
"R"
],
"last": "Mckeown",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of 35th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vasileios Hatzivassiloglou and Kathleen R. McKeown. 1997. Predicting the semantic orientation of adjec- tives. In Proceedings of 35th Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Effects of adjective orientation and gradability on sentence subjectivity",
"authors": [
{
"first": "Vasileios",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
},
{
"first": "Janyce",
"middle": [
"M"
],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of 18th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vasileios Hatzivassiloglou and Janyce M. Wiebe. 2000. Effects of adjective orientation and grad- ability on sentence subjectivity. In Proceedings of 18th International Conference on Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Mining and summarizing customer reviews",
"authors": [
{
"first": "Minqing",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of 10th ACM SIGKDD Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minqing Hu and Bing Liu. 2004. Mining and sum- marizing customer reviews. In Proceedings of 10th ACM SIGKDD Conference on Knowledge Discovery and Data Mining.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Using WordNet to measure semantic orientation of adjectives",
"authors": [
{
"first": "Jaap",
"middle": [],
"last": "Kamps",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "Marx",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"J."
],
"last": "Mokken",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "de Rijke",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of 4th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jaap Kamps, Maarten Marx, R. ort. Mokken, and Maarten de Rijke. 2004. Using WordNet to mea- sure semantic orientation of adjectives. In Proceed- ings of 4th International Conference on Language Resources and Evaluation.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Opinion observer: analyzing and comparing opinions on the web",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Minqing",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Junsheng",
"middle": [],
"last": "Cheng",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of 14th International World Wide Web Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Liu, Minqing Hu, and Junsheng Cheng. 2005. Opinion observer: analyzing and comparing opin- ions on the web. In Proceedings of 14th Interna- tional World Wide Web Conference.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "ARSA: a sentiment-aware model for predicting sales performance using blogs",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xiangji",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Aijun",
"middle": [],
"last": "An",
"suffix": ""
},
{
"first": "Xiaohui",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 30th Annual International ACM SIGIR Conference on Research and development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Xiangji Huang, Aijun An, and Xiaohui Yu. 2007. ARSA: a sentiment-aware model for predict- ing sales performance using blogs. In Proceedings of the 30th Annual International ACM SIGIR Con- ference on Research and development in Information Retrieval.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Opinion integration through semi-supervised topic modeling",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of 17th International World Wide Web Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Lu and Chengxiang Zhai. 2008. Opinion inte- gration through semi-supervised topic modeling. In Proceedings of 17th International World Wide Web Conference.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Rated aspect summarization of short comments",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Neel",
"middle": [],
"last": "Sundaresan",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of 18th International World Wide Web Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Lu, ChengXiang Zhai, and Neel Sundaresan. 2009. Rated aspect summarization of short com- ments. In Proceedings of 18th International World Wide Web Conference.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Introduction to Information Retrieval",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Prabhakar",
"middle": [],
"last": "Raghavan",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Schutze",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "13",
"issue": "",
"pages": "271--278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schutze, 2008. Introduction to Information Retrieval, chapter 13, pages 271-278. Cambridge University Press.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Machine Learning",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Mitchell. 1997. Machine Learning. McGraw- Hill.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Extracting product features and opinions from reviews",
"authors": [
{
"first": "Ana-Maria",
"middle": [],
"last": "Popescu",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Human Language Technology Conference and Empirical Methods in Natural Language Processing Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ana-Maria Popescu and Oren Etzioni. 2005. Extract- ing product features and opinions from reviews. In Proceedings of Human Language Technology Con- ference and Empirical Methods in Natural Lan- guage Processing Conference.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Modeling online reviews with multi-grain topic models",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Ryan",
"middle": [
"T"
],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of 17th International World Wide Web Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Titov and Ryan T. McDonald. 2008. Modeling online reviews with multi-grain topic models. In Proceedings of 17th International World Wide We- b Conference.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Thumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews",
"authors": [
{
"first": "Peter",
"middle": [
"D."
],
"last": "Turney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney. 2002. Thumbs up or thumbs down? semantic orientation applied to unsupervised classi- fication of reviews. In Proceedings of 40th Annual Meeting of the Association for Computational Lin- guistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Sentiment learning on product reviews via sentiment ontology tree",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Jon",
"middle": [
"Atle"
],
"last": "Gulla",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Wei and Jon Atle Gulla. 2010. Sentiment learning on product reviews via sentiment ontology tree. In Proceedings of 48th Annual Meeting of the Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Using appraisal taxonomies for sentiment analysis",
"authors": [
{
"first": "Casey",
"middle": [],
"last": "Whitelaw",
"suffix": ""
},
{
"first": "Navendu",
"middle": [],
"last": "Garg",
"suffix": ""
},
{
"first": "Shlomo",
"middle": [],
"last": "Argamon",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of 14th ACM Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Casey Whitelaw, Navendu Garg, and Shlomo Arga- mon. 2005. Using appraisal taxonomies for senti- ment analysis. In Proceedings of 14th ACM Confer- ence on Information and Knowledge Management.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The automatic identification of stop words",
"authors": [
{
"first": "J",
"middle": [
"W"
],
"last": "Wilbur",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Sirotkin",
"suffix": ""
}
],
"year": 1992,
"venue": "Journal of the American Society for Information Science",
"volume": "18",
"issue": "",
"pages": "45--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. W. Wilbur and K. Sirotkin. 1992. The automatic i- dentification of stop words. Journal of the American Society for Information Science, 18:45-55.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Recognizing contextual polarity in phraselevel sentiment analysis",
"authors": [
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Hoffmann",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Human Language Technology Conference and Empirical Methods in Natural Language Processing Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase- level sentiment analysis. In Proceedings of Hu- man Language Technology Conference and Empir- ical Methods in Natural Language Processing Con- ference.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences",
"authors": [
{
"first": "Hong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Vasileios",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of 8th Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hong Yu and Vasileios Hatzivassiloglou. 2003. To- wards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences. In Proceedings of 8th Conference on Em- pirical Methods in Natural Language Processing.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "All local hierarchies of the example SOT: the grey nodes sharing the same parent node in dashed line are called on the same local hierarchy under the parent node all instances in the training data set D that contain any label of nodes on the local hierarchy \u2206:",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Classification Performance (A Smaller Loss Value Means Better Classification Performance)",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Time Consuming (ms) (DF)",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "O-Loss (\u03b3 = 0.3) 0.005 0.01 0.015 0.02 0.025 0.03 0.035 0.04 0.045 0S-Loss (\u03b3 = 0.3) 0.005 0.01 0.015 0.02 0.025 0.03 0.035 0.04 0.045 0H-Loss (\u03b3 = 0.3)Figure 5: Comparative Performances on the Employed Feature Selection Algorithms",
"uris": null,
"num": null,
"type_str": "figure"
}
}
}
}