{ "paper_id": "I08-1038", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:40:37.099597Z" }, "title": "Bootstrapping Both Product Features and Opinion Words from Chinese Customer Reviews with Cross-Inducing 1", "authors": [ { "first": "Bo", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Peking University", "location": { "postCode": "100871", "settlement": "Beijing", "country": "China" } }, "email": "wangbo@pku.edu.cn" }, { "first": "Houfeng", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Peking University", "location": { "postCode": "100871", "settlement": "Beijing", "country": "China" } }, "email": "wanghf@pku.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We consider the problem of 1 identifying product features and opinion words in a unified process from Chinese customer reviews when only a much small seed set of opinion words is available. In particular, we consider a problem setting motivated by the task of identifying product features with opinion words and learning opinion words through features alternately and iteratively. In customer reviews, opinion words usually have a close relationship with product features, and the association between them is measured by a revised formula of mutual information in this paper. A bootstrapping iterative learning strategy is proposed to alternately both of them. A linguistic rule is adopted to identify lowfrequent features and opinion words. Furthermore, a mapping function from opinion words to features is proposed to identify implicit features in sentence. Empirical results on three kinds of product reviews indicate the effectiveness of our method.", "pdf_parse": { "paper_id": "I08-1038", "_pdf_hash": "", "abstract": [ { "text": "We consider the problem of 1 identifying product features and opinion words in a unified process from Chinese customer reviews when only a much small seed set of opinion words is available. In particular, we consider a problem setting motivated by the task of identifying product features with opinion words and learning opinion words through features alternately and iteratively. In customer reviews, opinion words usually have a close relationship with product features, and the association between them is measured by a revised formula of mutual information in this paper. A bootstrapping iterative learning strategy is proposed to alternately both of them. A linguistic rule is adopted to identify lowfrequent features and opinion words. Furthermore, a mapping function from opinion words to features is proposed to identify implicit features in sentence. Empirical results on three kinds of product reviews indicate the effectiveness of our method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "With the rapid expansion of network application, more and more customer reviews are available online, which are beneficial for product merchants to track the viewpoint of old customers and to assist potential customers to purchase products. However, it's time-consuming to read all reviews in person. 
As a result, it is valuable to mine customer reviews automatically and to provide users with an opinion summary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In reality, product features and opinion words play the most important role in mining the opinions of customers. One customer review of a cell phone is given as follows: (a) \"\u5916\u578b\u6f02\u4eae\uff0c\u5c4f\u5e55\u5927\uff0c\u62cd\u7167\u6548\u679c\u597d\u3002\" (The appearance is beautiful, the screen is big and the photo effect is good.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Product features are usually nouns such as \"\u5916\u578b\" (appearance) and \"\u5c4f\u5e55\" (screen), or noun phrases such as \"\u62cd\u7167\u6548\u679c\" (photo effect), expressing which attributes customers are most concerned with. Opinion words (henceforth, \"opword\" is short for \"opinion word\") are generally adjectives that customers use to express opinions, such as \"\u6f02\u4eae\" (beautiful), \"\u5927\" (big) and \"\u597d\" (good). This paper concentrates on identifying both product features and opinion words in Chinese customer reviews, the core part of an opinion mining system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There is much work on feature extraction and opinion word identification. Hu and Liu (2004) make use of association rule mining (Agrawal and Srikant, 1994) to extract frequent features; the adjectives surrounding any extracted feature are considered opinion words. Popescu and Etzioni (2005) utilized statistics-based point-wise mutual information (PMI) to extract product features. Based on the association of opinion words with product features, they take advantage of the syntactic dependencies computed by the MINIPAR parser (Lin, 1998) to identify opinion words. Turney (2002) applied an unsupervised learning technique based on the mutual information between document phrases and the two seed words \"excellent\" and \"poor\".", "cite_spans": [ { "start": 74, "end": 91, "text": "Hu and Liu (2004)", "ref_id": "BIBREF7" }, { "start": 129, "end": 156, "text": "(Agrawal and Srikant, 1994)", "ref_id": "BIBREF10" }, { "start": 272, "end": 298, "text": "Popescu and Etzioni (2005)", "ref_id": "BIBREF0" }, { "start": 543, "end": 554, "text": "(Lin, 1998)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Nevertheless, in previous work, identifying product features and identifying opinion words are always treated as two separate tasks. Actually, most product features are modified by surrounding opinion words in customer reviews; thus the two are highly context-dependent on each other, which is referred to as the context-dependency property henceforth. Given this co-occurrence characteristic, the identification of product features and of opinion words can be combined into a unified process. In particular, it is helpful to identify product features by using identified opinion words and vice versa. That implies that the two subtasks can be carried out alternately in a unified process.
Since the identification of product features is induced by opinion words and vice versa, this process is called cross-inducing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper focuses on learning product features and opinion words from Chinese customer reviews, the most important part of a feature-based opinion summary system. Two sub-tasks are involved:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Identifying features and opinion words: Exploiting the context-dependency property, a bootstrapping iterative learning strategy is proposed to identify both of them alternately.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Identifying implicit features: Implicit features occur frequently in customer reviews. An implicit feature is defined as a feature that does not appear explicitly in an opinion sentence. The association between features and opinion words, calculated with the revised mutual information, is used to identify implicit features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of this paper is organized as follows: Section 2 describes the approach in detail; Section 3 reports experiments that indicate the effectiveness of our approach; Section 4 presents related work; and Section 5 concludes and outlines future work. Figure 1 illustrates the framework of an opinion summary system; the principal parts related to this paper are shown in bold. The first phase, \"identifying features and opinion words\", works iteratively, alternately identifying features with the opinion words found so far and learning opinion words through the identified product features. Then, one linguistic rule is used to identify low-frequency features and opinion words. After that, a mapping function is designed to identify implicit features. ", "cite_spans": [], "ref_spans": [ { "start": 234, "end": 242, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Product features and opinion words are highly context-dependent on each other in customer reviews; e.g., the feature \"\u673a\u8eab\" (body) for digital cameras often co-occurs with opinion words such as \"\u5927\" (big) or \"\u5c0f\u5de7\" (delicate), while the feature \"\u6027\u4ef7\u6bd4\" (the ratio of performance to price) often co-occurs with the opinion word \"\u9ad8\" (high).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Iterative Learning Strategy", "sec_num": "2.1" }, { "text": "Product features can be identified with the help of previously identified surrounding opinion words, and vice versa. A bootstrapping method that works iteratively is proposed in Algorithm 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Iterative Learning Strategy", "sec_num": "2.1" }, { "text": "Algorithm 1 works as follows: given the seed opinion words and all the reviews, all noun phrases (of the form \"noun+\") form CandFeaLex (the set of feature candidates) and all adjectives form CandOpLex (the set of opinion word candidates). At each step, every feature candidate is scored by its association with the opinion words identified so far; each candidate whose score is above the pre-specified threshold Threshold_feature is added to ResFeaLex and subtracted from CandFeaLex. Opinion word candidates are processed similarly, with scores computed against the features in ResFeaLex. The iterative process continues until neither ResFeaLex nor ResOpLex is altered.
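To make the loop concrete, the following is a minimal Python sketch of the procedure just described. It is an illustration under assumptions, not the original implementation: the candidate sets, the seed lexicon, and a contingency(w1, w2) function returning the (A, B, C) counts of Section 2.2 over N reviews are assumed to be given; rmi implements formula (1) of Section 2.2; all names are illustrative, and the thresholds default to the 0.2 used in Section 3.

import math

def rmi(A, B, C, N):
    # formula (1): RMI = A * log(N*A / ((A+B)*(A+C))); zero if the pair never co-occurs
    if A == 0:
        return 0.0
    return A * math.log((N * A) / ((A + B) * (A + C)))

def bootstrap(cand_fea, cand_op, seed_op, contingency, N, t_fea=0.2, t_op=0.2):
    res_fea, res_op = set(), set(seed_op)
    changed = True
    # repeat until neither lexicon is altered, as in Algorithm 1
    while changed and cand_fea and cand_op:
        changed = False
        # score each feature candidate by its average RMI with the opinion lexicon
        for f in sorted(cand_fea):
            score = sum(rmi(*contingency(f, o), N) for o in res_op) / len(res_op)
            if score > t_fea:
                res_fea.add(f)
                cand_fea.discard(f)
                changed = True
        if not res_fea:
            break
        # score each opinion word candidate by its average RMI with the feature lexicon
        for o in sorted(cand_op):
            score = sum(rmi(*contingency(f, o), N) for f in res_fea) / len(res_fea)
            if score > t_op:
                res_op.add(o)
                cand_op.discard(o)
                changed = True
    return res_fea, res_op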
Any feature candidate and opinion word candidate whose relative distance in a sentence is less than or equal to the specified window size Minimum-Offset are regarded as co-occurring. The association between them is calculated by the revised mutual information, denoted RMI, which is described in detail in the following section and is also employed to identify implicit features in sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Iterative Learning Strategy", "sec_num": "2.1" }, { "text": "In customer reviews, features and opinion words usually co-occur frequently; features are usually modified by the surrounding opinion words. If the absolute value of the relative distance in a sentence between a feature and an opinion word is less than Minimum-Offset, they are considered context-dependent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Revised Mutual Information", "sec_num": "2.2" }, { "text": "Many methods have been proposed to measure the co-occurrence relation between two words, such as \u03c7\u00b2 (Church and Mercer, 1993), mutual information (Church and Hanks, 1989; Pantel and Lin, 2002), the t-test (Church and Hanks, 1989), and log-likelihood (Dunning, 1993). In this paper a revised formula of mutual information is used to measure the association, since the mutual information of a low-frequency word pair tends to be very high. Table 1 gives the contingency table for two words or phrases w_1 and w_2, where A is the number of reviews in which w_1 and w_2 co-occur; B is the number of reviews in which w_1 occurs but does not co-occur with w_2; C is the number of reviews in which w_2 occurs but does not co-occur with w_1; D is the number of reviews in which neither w_1 nor w_2 occurs; and N = A + B + C + D.", "cite_spans": [ { "start": 100, "end": 124, "text": "(Church and Mercer,1993)", "ref_id": null }, { "start": 146, "end": 170, "text": "(Church and Hanks, 1989;", "ref_id": "BIBREF5" }, { "start": 171, "end": 192, "text": "Pantel and Lin, 2002)", "ref_id": "BIBREF8" }, { "start": 202, "end": 226, "text": "(Church and Hanks, 1989)", "ref_id": "BIBREF5" }, { "start": 247, "end": 261, "text": "(Dunning,1993)", "ref_id": null } ], "ref_spans": [ { "start": 429, "end": 436, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Revised Mutual Information", "sec_num": "2.2" }, { "text": "With the table, the revised formula of mutual information is designed to calculate the association of w_1 with w_2 as formula (1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Revised Mutual Information", "sec_num": "2.2" }, { "text": "        w_2    ~w_2\nw_1     A      B\n~w_1    C      D\nTable 1: Contingency table", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 58, "text": "Table 1: Contingency table", "ref_id": null } ], "eq_spans": [], "section": "Revised Mutual Information", "sec_num": "2.2" }, { "text": "RMI(w_1, w_2) = freq(w_1, w_2) \\\\times \\\\log \\\\frac{p(w_1, w_2)}{p(w_1)\\\\,p(w_2)} = A \\\\times \\\\log \\\\frac{N \\\\times A}{(A+B) \\\\times (A+C)} \\\\qquad (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Revised Mutual Information", "sec_num": "2.2" }, { "text": "In Chinese reviews, one linguistic rule, \"noun+ adverb* adjective+\", occurs frequently, and most instances of the rule express positive or negative opinions on some feature, e.g., \"\u673a\u8eab/noun \u6bd4\u8f83/adverb \u5c0f\u5de7/adjective\" (The body is rather delicate), where each Chinese word is separated from its part-of-speech tag by the symbol \"/\".
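As an illustration only, instances of this rule can be matched with a simple pattern over the flattened tag sequence. The sketch below assumes ICTCLAS-style tags in which n marks nouns, d adverbs and a adjectives, as in example (d) of Section 3.1; the function name and tokenization convention are hypothetical, not the paper's implementation.

import re

def rule_instances(tagged_sentence):
    # tagged_sentence: tokens of the form word/pos separated by spaces
    tokens = [t.rsplit('/', 1) for t in tagged_sentence.split()]
    tags = ''.join(pos[0] for _, pos in tokens)  # one tag letter per token
    # the rule noun+ adverb* adjective+ over the tag string
    for m in re.finditer(r'(n+)d*(a+)', tags):
        noun_phrase = ''.join(tokens[i][0] for i in range(m.start(1), m.end(1)))
        adjectives = [tokens[i][0] for i in range(m.start(2), m.end(2))]
        yield noun_phrase, adjectives  # candidate feature, candidate opwords

For example, applied to the tagged fragment \u5f00\u673a/n \u901f\u5ea6/n \u8fd8/d \u6ee1/d \u5feb/a from (d), it yields the candidate pair (\u5f00\u673a\u901f\u5ea6, [\u5feb]).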
Intuitively, this linguistic rule can be used to improve the output of the iterative learning. For each instance of the rule, if the \"noun+\" part exists in ResFeaLex, the \"adjective+\" part is added to ResOpLex; and if the \"adjective+\" part exists in ResOpLex, the noun phrase \"noun+\" part is added to ResFeaLex. In this way, most low-frequency features and opinion words are recognized.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identifying Low-Frequency Features and Opinion Words", "sec_num": "2.3" }, { "text": "The context-dependency property indicates the contextual association between product features and opinion words. As a result, with the revised mutual information, implicit features can be deduced from opinion words. A mapping function f: opword \u2192 feature is used to deduce the corresponding feature for each opword, where f(opword) is defined as the feature with the largest association with the opinion word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identifying Implicit Features", "sec_num": "2.4" }, { "text": "If an opinion sentence contains opinion words but does not have any explicit feature, the mapping function f: opword \u2192 feature is employed to generate a feature for each opinion word, and that feature is considered an implicit feature of the opinion sentence. Two instances are given in (b) and (c), where the implicit features are inserted at suitable positions and enclosed in parentheses. Since f(\"\u6f02\u4eae\" (beautiful)) = \"\u5916\u89c2\" (appearance) and f(\"\u65f6\u5c1a\" (fashionable)) = \"\u5916\u89c2\" (appearance), \"\u5916\u89c2\" (appearance) is an implicit feature in (b). Similarly, the implicit features in (c) are \"\u6027\u80fd\" (performance) and \"\u56fe\u50cf\" (picture).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identifying Implicit Features", "sec_num": "2.4" }, { "text": "(b) (\u5916\u89c2)\u6f02\u4eae\u800c\u4e14(\u5916\u89c2)\u65f6\u5c1a\u3002It is (appearance) beautiful and (appearance) fashionable. (c) (\u6027\u80fd)\u5f88\u7a33\u5b9a\uff0c\u800c\u4e14(\u56fe\u50cf)\u5f88\u6e05\u6670\u3002It is (performance) very stable and (picture) very clear.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identifying Implicit Features", "sec_num": "2.4" }, { "text": "We gathered customer reviews of three kinds of electronic products from http://it168.com: digital cameras, cell phones and tablets. The first 300 reviews for each kind were downloaded. One annotator was asked to label each sentence with product features (including implicit features) and opinion words; the resulting annotation set is shown in Table 2. Unlike English, Chinese text is not separated into words by any delimiter.
Therefore, the reviews are tokenized and tagged with part-of-speech by the tool ICTCLAS. One example of its output is given in (d).", "cite_spans": [], "ref_spans": [ { "start": 312, "end": 319, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Experiment 3.1 Data Collection", "sec_num": "3" }, { "text": "(d) \u5f00\u673a/n \u901f\u5ea6/n \u8fd8/d \u6ee1/d \u5feb/a \uff0c/w \u955c\u5934/n \u4fdd\u62a4\u76d6/n \u62c9\u5f00/v \u5c31/d \u53ef\u4ee5/v \u8fdb\u5165/v \u62cd\u6444/n \u72b6\u6001/n \uff0c/w \u6a21\u5f0f/n \u9009\u62e9/vn \u5207\u6362/vn \u4e5f/d \u5f88/d \u65b9\u4fbf/a \u3002/w", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 3.1 Data Collection", "sec_num": "3" }, { "text": "The seed opinion words employed in the iterative learning are: \"\u6e05\u6670\" (clear), \"\u5feb\" (quick), \"\u767d\" (white), \"\u5dee\u52b2\" (poor), \"\u597d\" (good), \"\u4e0d\u9519\" (good), \"\u9ad8\" (high), \"\u5c0f\" (small), \"\u591a\" (many), \"\u957f\" (long). Empirically, Threshold_feature and Threshold_opword in Algorithm 1 are both set to 0.2, and Minimum-Offset is set to 4. Table 4. Evaluation of iterative learning (upper values) and of the combination of iterative learning and the linguistic rule (lower values).", "cite_spans": [], "ref_spans": [ { "start": 307, "end": 314, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Experiment 3.1 Data Collection", "sec_num": "3" }, { "text": "Following Hu and Liu (2004), the mined features form the result set, while the features in the manually annotated corpus form the answer set. With the two sets, precision, recall and f-score are used to evaluate the experimental results at the set level.", "cite_spans": [ { "start": 3, "end": 20, "text": "Hu and Liu (2004)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Measurement", "sec_num": "3.2" }, { "text": "In our work, the evaluation is also conducted at the sentence level, for three reasons: first, each feature or opinion word may occur many times in reviews but occurs only once in the corresponding answer set; second, implicit features must be evaluated at the sentence level; third, to generate an opinion summary, the features and the opinion words should be identified for each opinion sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Measurement", "sec_num": "3.2" }, { "text": "At the sentence level, the features and opinion words identified for each opinion sentence are compared with the annotation of the corresponding sentence. Precision, recall and f-score are again used to measure the performance. Hu and Liu (2004) adopted association rule mining to mine opinion features from English customer reviews. Since the original corpus and source code are not available to us, we reimplemented their algorithm for comparison; it is denoted the apriori method below. Note that the two pruning techniques proposed in Hu and Liu (2004), compactness pruning and redundancy pruning, were included in our experiment. The evaluation on our test data is listed in Table 3. The row indexed average gives the average performance of the corresponding column, with each entry in bold. Table 4 shows our results on the same data; the upper value in each cell is the result of the iterative learning strategy, while the lower value is that of the combination of iterative learning and the linguistic rule.
The average row again shows the average performance of the corresponding columns, with each entry in bold.", "cite_spans": [ { "start": 224, "end": 241, "text": "Hu and Liu (2004)", "ref_id": "BIBREF7" }, { "start": 596, "end": 613, "text": "Hu and Liu (2004)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 864, "end": 871, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Measurement", "sec_num": "3.2" }, { "text": "For features, the average precision, recall and f-score at both the set and sentence levels increase in the order apriori < iterative < ite+rule, where apriori denotes Hu and Liu's method, iterative denotes our iterative strategy, and ite+rule denotes the combination of the iterative strategy and the linguistic rule. The f-score increase from apriori to ite+rule reaches 22.65% at the set level and exceeds 10% at the sentence level. The main reason for the poor set-level performance of apriori is that many high-frequency common words such as \"\u7535\u8111\" (computer), \"\u4e2d\u56fd\" (China) and \"\u65f6\u95f4\" (time) are extracted as features. Moreover, the poor sentence-level performance of the apriori method is due to its inability to identify implicit features. Furthermore, the increase in f-score from iterative to ite+rule at both levels shows that the performance is enhanced by the linguistic rule. Table 4 also shows that the performance in learning opinion words is improved after the linguistic rule is applied. At the set level, the average precision increases from 84.97% to 85.51%, while the average recall rises from 33.15% to 54.57%. Accordingly, the average f-score increases significantly, by about 18.91 points.", "cite_spans": [], "ref_spans": [ { "start": 886, "end": 893, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "3.3" }, { "text": "At the sentence level, although there is a slight decrease in the average precision, there is a dramatic increase in the average recall; thus the average f-score rises from 53.91% to 72.79%. Furthermore, the best f-score (66.54%) at the set level and the best f-score (72.79%) at the sentence level indicate the effectiveness of ite+rule in identifying opinion words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.3" }, { "text": "Our work is closely related to the system of Hu and Liu (2004), in which association rule mining is used to extract frequent review noun phrases as features. After that, two pruning techniques, compactness pruning and redundancy pruning, are applied. Frequent features are used to find potential opinion words (adjectives), and WordNet synonyms/antonyms, in conjunction with a set of seed words, are used to find actual opinion words. Finally, opinion words are used to extract associated infrequent features. Their system extracts only explicit features. Our work differs from theirs in two respects: (1) their method cannot identify implicit features, which occur frequently in opinion sentences; (2) product features and opinion words are identified in two separate steps in their system, whereas here they are learned in a unified process and induce each other. Popescu and Etzioni (2005) used web-based point-wise mutual information (PMI) to extract product features and used the identified features to find potential opinion phrases via co-occurrence association. They take advantage of the syntactic dependencies computed by the MINIPAR parser.
If an explicit feature is found in a sentence, 10 extraction rules are applied to find the heads of potential opinion phrases. Each head word, together with its modifier, is returned as a potential opinion phrase. Our work differs from theirs in two respects: (1) in their work, product features and opinion words are identified separately, whereas here they are learned simultaneously and boost each other; (2) they rely on the syntactic parser MINIPAR, while no such syntactic parser is available for Chinese, so our algorithm requires only a small seed lexicon of opinion words. Although co-occurrence association is also used to derive opinion words from explicit features in their work, the way co-occurrence association is represented differs. Besides, the two sub-tasks boost each other in this paper.", "cite_spans": [ { "start": 40, "end": 57, "text": "(Hu and Liu,2004)", "ref_id": null }, { "start": 873, "end": 899, "text": "Popescu and Etzioni (2005)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "On identifying opinion words, Morinaga et al. (2002) utilized information gain to extract classification features with a supervised method; Hatzivassiloglou and Wiebe (1997) used textual conjunctions such as \"fair and legitimate\" or \"simplistic but well-received\" to separate similarly and oppositely connoted words; other methods are presented in (Gamon and Aue, 2005; Wilson et al., 2006). The principal difference from previous work is that they treated extracting opinion words as a separate task, whereas we combine identifying features and opinion words in a unified process. Besides, we identify opinion words per sentence, whereas in their work they are identified per review.", "cite_spans": [ { "start": 30, "end": 51, "text": "Morinaga et al (2002)", "ref_id": "BIBREF11" }, { "start": 347, "end": 367, "text": "Gamon and Aue, 2005;", "ref_id": "BIBREF6" }, { "start": 368, "end": 387, "text": "Wilson et al, 2006)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "In this paper, the identification of product features and that of opinion words induce each other and are combined in a unified process. An iterative learning strategy based on the context-dependency property is proposed to learn product features and opinion words alternately; the final feature lexicon and opinion word lexicon are built with very little prior knowledge (only ten seed opinion words) and augment each other alternately. A revised formula of mutual information is used to calculate the association between each feature and opinion word. A linguistic rule is utilized to recall low-frequency features and opinion words. Besides, a mapping function is designed to identify implicit features in sentences. In addition to the set-level evaluation, the experiments are evaluated at the sentence level. Empirical results indicate that the iterative learning strategy performs better than the apriori method and that features and opinion words can be identified effectively with cross-inducing.
Furthermore, the sentence-level evaluation shows the effectiveness of the approach in identifying implicit features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "In future work, we will learn the semantic orientation of each opinion word, calculate the polarity of each subjective sentence, and then construct a feature-based summary system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Supported by the National Natural Science Foundation of China under grant No. 60675035 and the Beijing Natural Science Foundation under grant No. 4072012.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.nlp.org.cn", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Extracting Product Features and Opinions from Reviews", "authors": [ { "first": "Ana", "middle": [ "Maria" ], "last": "Popescu", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2005, "venue": "Proceedings of HLT-EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ana Maria Popescu and Oren Etzioni. 2005. Extracting Product Features and Opinions from Reviews. Proceedings of HLT-EMNLP (2005).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Dependency-Based Evaluation of MINIPAR", "authors": [ { "first": "De-Kang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Workshop on the Evaluation of Parsing Systems", "volume": "", "issue": "", "pages": "298--312", "other_ids": {}, "num": null, "urls": [], "raw_text": "De-Kang Lin. 1998. Dependency-Based Evaluation of MINIPAR. In Proceedings of the Workshop on the Evaluation of Parsing Systems, Granada, Spain, 1998, 298-312.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning Subjective Nouns Using Extraction Pattern Bootstrapping", "authors": [ { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Theresa", "middle": [], "last": "Wilson", "suffix": "" } ], "year": 2003, "venue": "Seventh Conference on Natural Language Learning (CoNLL-03). ACL SIGNLL. Pages", "volume": "", "issue": "", "pages": "25--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen Riloff, Janyce Wiebe, and Theresa Wilson. 2003. Learning Subjective Nouns Using Extraction Pattern Bootstrapping. Seventh Conference on Natural Language Learning (CoNLL-03). ACL SIGNLL. Pages 25-32.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Learning Extraction Patterns for Subjective Expressions", "authors": [ { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 2003, "venue": "Conference on Empirical Methods in Natural Language Processing (EMNLP-03). ACL SIGDAT", "volume": "", "issue": "", "pages": "105--112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen Riloff and Janyce Wiebe. 2003. Learning Extraction Patterns for Subjective Expressions. Conference on Empirical Methods in Natural Language Processing (EMNLP-03). ACL SIGDAT.
2003, 105-112.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Introduction to the special issue on computational linguistics using large corpora", "authors": [ { "first": "Kenneth", "middle": [ "Ward" ], "last": "Church", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "", "pages": "1--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth Ward Church and Robert L. Mercer. 1993. Introduction to the special issue on computational linguistics using large corpora. Computational Lin- guistics 19:1-24", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Word Association Norms, Mutual Information and Lexicography", "authors": [ { "first": "Kenneth", "middle": [ "Ward" ], "last": "Church", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Hanks", "suffix": "" } ], "year": 1989, "venue": "Proceedings of the 26th Annual Conference of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth Ward Church and Patrick Hanks. 1989. Word Association Norms, Mutual Information and Lexi- cography. Proceedings of the 26th Annual Confer- ence of the Association for Computational Linguis- tics(1989).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Automatic identification of sentiment vocabulary: exploiting low association with known sentiment terms", "authors": [ { "first": "Michael", "middle": [], "last": "Gamon", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Aue", "suffix": "" } ], "year": 2005, "venue": "ACL 2005 Workshop on Feature Engineering", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Gamon and Anthony Aue. 2005. Automatic identification of sentiment vocabulary: exploiting low association with known sentiment terms. In :ACL 2005 Workshop on Feature Engineering,2005.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Mining Opinion Features in Customer Reviews", "authors": [ { "first": "Minqing", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of Nineteeth National Conference on Artificial Intellgience (AAAI-2004)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minqing Hu and Bing Liu. 2004. Mining Opinion Fea- tures in Customer Reviews. Proceedings of Nineteeth National Conference on Artificial Intellgience (AAAI-2004), San Jose, USA, July 2004.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Document Clustering with Committees", "authors": [ { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" }, { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2002, "venue": "Proceedings of ACM Conference on Research and Development in Information Retrieval (SIGIR-02)", "volume": "", "issue": "", "pages": "199--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Pantel and Dekang Lin. 2002. Document Clus- tering with Committees. In Proceedings of ACM Conference on Research and Development in Infor- mation Retrieval (SIGIR-02). pp. 199-206. Tampere, Finland.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Thumbs Up or Thumbs Down? 
Semantic Orientation Applied to Unsupervised Classification of Reviews", "authors": [ { "first": "Peter", "middle": [ "D" ], "last": "Turney", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "417--424", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter D. Turney. 2002. Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews. ACL 2002: 417-424.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Fast algorithm for mining association rules. VLDB'94", "authors": [ { "first": "Rakesh", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Ramakrishan", "middle": [], "last": "Srikant", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rakesh Agrawal and Ramakrishan Srikant. 1994. Fast algorithm for mining association rules. VLDB'94, 1994.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Mining Product Reputations on the WEB", "authors": [ { "first": "Satoshi", "middle": [], "last": "Morinaga", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Yamanishi", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Tateishi", "suffix": "" }, { "first": "Toshikazu", "middle": [], "last": "Fukushima", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "341--349", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satoshi Morinaga, Kenji Yamanishi, Kenji Tateishi, and Toshikazu Fukushima. 2002. Mining Product Reputations on the WEB. Proceedings of the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, (2002) 341-349.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Accurate methods for the statistics of surprise and coincidence", "authors": [ { "first": "Ted", "middle": [], "last": "Dunning", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "", "pages": "61--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ted Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics 19:61-74.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Recognizing strong and weak opinion clauses", "authors": [ { "first": "Theresa", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Hwa", "suffix": "" } ], "year": 2006, "venue": "Computational Intelligence", "volume": "22", "issue": "2", "pages": "73--99", "other_ids": {}, "num": null, "urls": [], "raw_text": "Theresa Wilson, Janyce Wiebe, and Rebecca Hwa. 2006. Recognizing strong and weak opinion clauses. Computational Intelligence 22 (2): 73-99.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "The framework of an opinion summary system", "uris": null, "type_str": "figure", "num": null }, "TABREF0": { "text": "The sets ResFeaLex and ResOpLex are used to store the final features and opinion words. Initially, ResFeaLex is empty while ResOpLex consists of all the seed opinion words. 
At each iterative step, each feature candidate in CandFeaLex is scored by its context-dependent association with each opword in ResOpLex; every candidate whose score is above the pre-specified threshold Threshold_feature is added to ResFeaLex and subtracted from CandFeaLex. Algorithm 1. Bootstrapping product features and opinion words with cross-inducing: Bootstrap-Learning(ReviewData, SeedOpLex, Threshold_feature, Threshold_opword)", "num": null, "html": null, "content": "
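// Input: ReviewData (POS-tagged reviews), SeedOpLex (seed opinion words), thresholds Threshold_feature and Threshold_opword. Output: ResFeaLex (feature lexicon) and ResOpLex (opinion word lexicon).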
1 Parse(ReviewData);
2 ResFeaLex = {}, ResOpLex = SeedOpLex;
3 CandFeaLex = all noun phrases in ReviewData;
4 CandOpLex = all adjectives in ReviewData;
5 while (CandFeaLex\u2260{} && CandOpLex\u2260{})
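// Phase 1 (lines 6-14): score each feature candidate by its average RMI with the current opinion lexicon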
6    do for each candfea \u2208 CandFeaLex
7       do for each opword \u2208 ResOpLex
8          do calculate RMI(candfea, opword) with ReviewData;
9       score(candfea) = \u03a3_{opword \u2208 ResOpLex} RMI(candfea, opword) / |ResOpLex|;
10   sort CandFeaLex by score;
11   for each candfea \u2208 CandFeaLex
12      do if (score(candfea) > Threshold_feature)
13         then ResFeaLex = ResFeaLex + {candfea};
14              CandFeaLex = CandFeaLex - {candfea};
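// Phase 2 (lines 15-23): score each opinion word candidate by its average RMI with the current feature lexicon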
15   for each candop \u2208 CandOpLex
16      do for each feature \u2208 ResFeaLex
17         do calculate RMI(feature, candop) with ReviewData;
18      score(candop) = \u03a3_{feature \u2208 ResFeaLex} RMI(feature, candop) / |ResFeaLex|;
19   sort CandOpLex by score;
20   for each candop \u2208 CandOpLex
21      do if (score(candop) > Threshold_opword)
22         then ResOpLex = ResOpLex + {candop};
23              CandOpLex = CandOpLex - {candop};
24   return ResFeaLex, ResOpLex;
", "type_str": "table" }, "TABREF1": { "text": "The annotation set for features and opinion words are shown in table 2.", "num": null, "html": null, "content": "
Product Name     No. of Features   No. of Opinion Words
digital camera   135               97
cell-phone       155               125
tablet           96                83
", "type_str": "table" } } } }