|
{ |
|
"paper_id": "O09-6003", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T08:10:56.183695Z" |
|
}, |
|
"title": "Identification of Opinion Holders", |
|
"authors": [ |
|
{ |
|
"first": "Lun-Wei", |
|
"middle": [], |
|
"last": "Ku", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Taiwan University", |
|
"location": { |
|
"addrLine": "No. 1, Sec. 4, Roosevelt Road", |
|
"postCode": "10617", |
|
"settlement": "Taipei, Taiwan" |
|
} |
|
}, |
|
"email": "lwku@nlg.csie.ntu.edu.tw" |
|
}, |
|
{ |
|
"first": "Chia-Ying", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Taiwan University", |
|
"location": { |
|
"addrLine": "No. 1, Sec. 4, Roosevelt Road", |
|
"postCode": "10617", |
|
"settlement": "Taipei, Taiwan" |
|
} |
|
}, |
|
"email": "cylee@nlg.csie.ntu.edu.tw" |
|
}, |
|
{ |
|
"first": "Hsin-Hsi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Taiwan University", |
|
"location": { |
|
"addrLine": "No. 1, Sec. 4, Roosevelt Road", |
|
"postCode": "10617", |
|
"settlement": "Taipei, Taiwan" |
|
} |
|
}, |
|
"email": "hhchen@csie.ntu.edu.tw" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Opinion holder identification aims to extract entities that express opinions in sentences. In this paper, opinion holder identification is divided into two subtasks: author's opinion recognition and opinion holder labeling. Support vector machine (SVM) is adopted to recognize author's opinions, and conditional random field algorithm (CRF) is utilized to label opinion holders. New features are proposed for both methods. Our method achieves an f-score of 0.734 in the NTCIR7 MOAT task on the Traditional Chinese side, which is the best performance among results of machine learning methods proposed by participants, and also it is close to the best performance of this task. In addition, inconsistent annotations of opinion holders are analyzed, along with the best way to utilize the training instances with inconsistent annotations being proposed.", |
|
"pdf_parse": { |
|
"paper_id": "O09-6003", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Opinion holder identification aims to extract entities that express opinions in sentences. In this paper, opinion holder identification is divided into two subtasks: author's opinion recognition and opinion holder labeling. Support vector machine (SVM) is adopted to recognize author's opinions, and conditional random field algorithm (CRF) is utilized to label opinion holders. New features are proposed for both methods. Our method achieves an f-score of 0.734 in the NTCIR7 MOAT task on the Traditional Chinese side, which is the best performance among results of machine learning methods proposed by participants, and also it is close to the best performance of this task. In addition, inconsistent annotations of opinion holders are analyzed, along with the best way to utilize the training instances with inconsistent annotations being proposed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Opinions describe subjective thinking of people. With the blooming of Web 2.0, a large number of free and online articles have become easily accessible. Although people are interested in the shifting of opinions, they cannot read such a large quantity of articles in a short time. Opinion mining can analyze opinions from many information sources automatically and helps extract opinions, along with determining their polarities, strength, holders, and targets. Opinion polarities tell us whether the current opinions are positive, neutral, or negative. The opinion strength then tells us the degree of their attitude, i.e., strong, medium, or weak. Opinion holders are the people who express opinions, while opinion targets are the objects of those opinions. Let us take \"Mr. Wang loves to play baseball\" as an example. In this opinion sentence, its polarity is positive, its strength is strong, the opinion holder is Mr. Wang, and the opinion target is playing baseball. It is an opinion from Mr. Wang that indicates he has a positive attitude towards playing baseball.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Opinion holder identification is useful in knowing who has the same attitude, what kind of issues a specific person cares about, and whether there are different opinions from some specific persons. This technique can also be applied to social network analysis to discover who the opinion leader is. It is also important in an opinion question answering system as it can provide the owner of opinions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "There are three major challenges in opinion holder identification: co-reference resolution, parsing nested opinions, and inconsistent annotation utilization. Like the conventional question answering problem, pronoun-antecedent and zero anaphor problems have to be solved before identifying opinion holders. Nested opinions are common in long sentences. People like to quote opinions of other people to show that they are impartial, but this behavior also implies that they agree with their quotes. In this case, we need to identify both the quoting and the quoted holders for further analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "It is sometimes difficult to determine the holder of an opinion. For example, even though the holder is obvious in the opinion sentence, we may find that he represents some organization and is presenting the organization's opinion. The following sentence is an example: \"According to the media, [the] U.S. and China are discussing the agreement of terminating the usage of nuclear weapons; Becon said they have discussed this issue before.\" In this sentence, \"they\" refers to the U.S. and China. Becon quoted words from the U.S. and China, so this is a nested structure. In addition, although this expression is said by the media and Becon, the holder should be the U.S. and China. These challenges all complicate the annotation process, and a double check and a selection process are necessary when generating the gold standard. Pang and Lee (2008) have mentioned some important research projects in the domain of opinion mining. Kim and Hovy (2004) proposed four elements in opinion mining, including the opinion polarity, the opinion strength, the opinion holder, and the opinion target. Among them, the research for opinion holder identification is new. Previous researchers mainly have proposed two kinds of methods: heuristic rule based and machine learning based methods.", |
|
"cite_spans": [ |
|
{ |
|
"start": 830, |
|
"end": 849, |
|
"text": "Pang and Lee (2008)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 931, |
|
"end": 950, |
|
"text": "Kim and Hovy (2004)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "For heuristic rule based methods, Seki et al. (2009) utilized noun phrases and linguistic features, and adopted SVM to classify opinion holders into authors and non-authors in English and Japanese materials. Xu and Wang (2008) first solved the co-reference resolution, and extracted opinion holders by rules involving punctuation marks, conjunctions, prefixes, suffixes, and opinion operators. They achieved an f-score of 0.825 in the NTCIR7 MOAT task on the Traditional Chinese side, which is the state of the art.", |
|
"cite_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 52, |
|
"text": "Seki et al. (2009)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 208, |
|
"end": 226, |
|
"text": "Xu and Wang (2008)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Heuristic Rule based Methods", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "For the machine learning based methods, many researchers have adopted maximum entropy algorithms, SVM, or the conditional random field model. Kim and Hovy (2006) utilized the maximum entropy model to extract opinion holders and targets from news articles. They first found opinion words and labeled semantic roles, then identified the semantic roles that are opinion holders and targets. Kim (2007 Kim ( , 2008 classified opinion holders into authors, simple holders and co-referenced holders, then extracted lexical and syntactic features for SVM to select the best opinion holder. So far, this is the best method for English materials, and it achieved an f-score of 0.346. Wu (2008) used words and parts of speech as features in L2-norm linear SVM to solve this research problem as a similar method for named entity identification. Breck and Choi (2007, 2005) utilized lexical features, syntactic features, dictionary-based features, and dependency features by CRF to identify opinion holders. Meng and Wang (2008) used words, parts of speech, and opinion operators, while Liu and Zhao (2008) extracted parts of speech, semantic features, contextual features, dependency features, and position features by CRF.", |
|
"cite_spans": [ |
|
{ |
|
"start": 142, |
|
"end": 161, |
|
"text": "Kim and Hovy (2006)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 388, |
|
"end": 397, |
|
"text": "Kim (2007", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 398, |
|
"end": 410, |
|
"text": "Kim ( , 2008", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 675, |
|
"end": 684, |
|
"text": "Wu (2008)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 834, |
|
"end": 843, |
|
"text": "Breck and", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 844, |
|
"end": 861, |
|
"text": "Choi (2007, 2005)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 996, |
|
"end": 1016, |
|
"text": "Meng and Wang (2008)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 1075, |
|
"end": 1094, |
|
"text": "Liu and Zhao (2008)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Machine Learning based Methods", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We propose a unique approach that divides opinion holder identification into two tasks: author's opinion recognition and opinion holder labeling. We then find better strategies to perform these two tasks. For author's opinion recognition, we adopt SVM by features such as words and their parts of speech, named entities, punctuation marks, the context, and opinion related information in the current sentence. Among them, some context features (the roles of verbs) and opinion related features (information of positive words, neutral words, negative words, and opinion operators) have not been utilized in opinion holder identification before. Detailed features will be described in Sections 3.2 and 3.3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed Methods", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Five procedures of opinion holder identification are proposed in this paper, including text pre-processing, author's opinion recognition, opinion holder labeling, post-processing, and result generation. Chinese word segmentation, parts of speech tagging, and named entity recognition are performed in the text pre-processing stage. Then, author's opinions are recognized and opinion holder labeling determines the text segment referring to the holder. We have two strategies for applying the proposed methods of author's opinion recognition and opinion holder labeling. These two strategies are described in Section 3.5. After that, this labeled text segment is processed by the post-processing procedure to generate the final opinion holder. The flowchart describing these five procedures is shown in Figure 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 802, |
|
"end": 810, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Opinion Holder Identification", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "In the text pre-processing stage, we utilize the Chinese word segmentation system developed by Lo (2008) . We, however, modify its segmentation module and add additional name dictionaries to it. The length limit of the modified Chinese name module is set looser and Japanese family names are added so that the segmentation system can recognize Japanese names, which are usually longer than Chinese names. Occupations, titles, and company names are also added to the dictionary of the segmentation system to provide useful holder relevant information.", |
|
"cite_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 104, |
|
"text": "Lo (2008)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text Pre-processing Stage", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Author's opinion recognition finds out whether the opinion holder of the current sentence is the author. In this paper, this task is viewed as a binary classification task and LIBSVM (Chang & Lin, 2001 ) is adopted for classification. The main features extracted for this task are words, parts of speech, named entities, punctuation marks, sentence components, and opinion operators. Table 1 shows all of the features utilized in this task. The lexicon features include Identification of Opinion Holders 387 first person pronouns, which are often utilized by the authors to refer to themselves. The part-of-speech features include general pronouns and personal pronouns, because pronouns usually refer to persons and organizations and they can express opinions. The named-entity features are considered because they can either be the opinion holders or the opinion targets. Is there an exclamation mark (\"\uff01\" or \"!\") ?", |
|
"cite_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 201, |
|
"text": "(Chang & Lin, 2001", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 384, |
|
"end": 391, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Author's Opinion Recognition Stage", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "fHasQuestion Is there a question mark (\"\uff1f\"or \"?\") ?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Author's Opinion Recognition Stage", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "fHasColon Is there a colon (\"\uff1a\"or \":\") ?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Author's Opinion Recognition Stage", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "fHasLeftQuotation Are there any quotation marks (\"\u300c\" or \"\u3010\" ) ?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Author's Opinion Recognition Stage", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "fHasRightQuotation Are there any quotation marks (\"\u300d\" or \"\u3011\") ?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Author's Opinion Recognition Stage", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The number of Chinese characters fNumWord", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentential fNumChar", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The number of Chinese words fNumSubsen", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentential fNumChar", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The number of clauses Opinion fOperator1 to 203 Is there an opinion operator?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentential fNumChar", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The punctuation features include punctuation marks that possibly co-occur with opinions. For example, exclamation points and question marks often appear in sentences expressed by people because they can bear sentiment, whereas colons and quotations are usually used to quote expressed words. The sentential features tell the length of the sentences by their composite characters, words, and clauses. We consider these features because we think that authors may need a sentence of a suitable length to express opinions. As to the opinion features, a total of 203 opinion operators, such as \uf96f (say), \u6307\u51fa (point out), and \u8a8d\u70ba (think), are collected manually from the earlier NTCIR corpus (Seki et al., 2008) for this task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 683, |
|
"end": 702, |
|
"text": "(Seki et al., 2008)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentential fNumChar", |
|
"sec_num": null |
|
}, |
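To make the feature design above concrete, the following is a minimal sketch of an author's-opinion classifier. It assumes sentences are already word-segmented, POS-tagged (CKIP-style tags), and NE-tagged, and that an opinion operator list is available; the feature set mirrors Table 1 but is illustrative rather than the authors' exact configuration, and scikit-learn's LinearSVC stands in for LIBSVM.

```python
# Minimal sketch of the author's opinion recognizer (Section 3.2).
# Assumptions: sentences are pre-segmented, POS-tagged (CKIP-style tags),
# and NE-tagged; FIRST_PERSON and OPINION_OPERATORS are illustrative lists.
from sklearn.svm import LinearSVC

FIRST_PERSON = {"我", "我們", "本人"}            # lexicon features (assumed list)
OPINION_OPERATORS = {"說", "指出", "認為"}        # small subset of the 203 operators

def author_opinion_features(words, pos_tags, ne_tags):
    """Build one fixed-length feature vector for a segmented sentence."""
    sent = "".join(words)
    return [
        float(any(w in FIRST_PERSON for w in words)),       # first-person pronoun
        float(any(t == "Nh" for t in pos_tags)),            # personal pronoun (CKIP Nh, assumed)
        float(any(t != "O" for t in ne_tags)),              # any named entity present
        float("！" in sent or "!" in sent),                  # fHasExclamation-style feature
        float("？" in sent or "?" in sent),                  # fHasQuestion
        float("：" in sent or ":" in sent),                  # fHasColon
        float("「" in sent or "【" in sent),                 # fHasLeftQuotation
        float("」" in sent or "】" in sent),                 # fHasRightQuotation
        float(len(sent)),                                    # fNumChar
        float(len(words)),                                   # fNumWord
        float(sent.count("，") + 1),                         # fNumSubsen (rough clause count)
        float(any(w in OPINION_OPERATORS for w in words)),   # opinion operator present
    ]

def train_author_classifier(X, y):
    """X: list of feature vectors; y: 1 if the holder is the author, else 0."""
    clf = LinearSVC()        # stands in for LIBSVM's linear SVM
    clf.fit(X, y)
    return clf
```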
|
{ |
|
"text": "Opinion holder labeling finds the text segment that represents the opinion holder. In the beginning, this task is also viewed as a binary classification problem for all words of a sentence, where the decision tree determines whether the current word is part of the opinion holder or not. CHAID decision tree algorithm provided by RapidMiner (Mierswa, Wurst, Klinkenberg, Scholz, & Euler, 2006 ) is adopted. It is a pruned decision tree using the chi-square test.", |
|
"cite_spans": [ |
|
{ |
|
"start": 341, |
|
"end": 392, |
|
"text": "(Mierswa, Wurst, Klinkenberg, Scholz, & Euler, 2006", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Holder Labeling Stage", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "As the other alternative, we view the opinion holder labeling problem as a sequential labeling problem. Therefore, the CRF algorithm (Lafferty, McCallum, & Pereira, 2001 ) is selected to label whether each composite word is a portion of the opinion holder and CRF++ (Kudo, 2003) is adopted for implementation. Features for experiments are listed in Table 2 . Features for opinion holder labeling include words, parts of speech, named entities, punctuation marks, sentential information, contextual information, and opinion related information. Some of the features are the same as those we have selected for the author's opinion recognition. The lexicon feature is the current word to be determined. The part-of-speech features of the current word include its part of speech, and whether it is a noun or a pronoun. The binary properties of being a noun or a pronoun are emphasized here because they are the most commonly seen parts of speech in opinion holders. Punctuation marks also are considered as features here. Sentential features tell the position of the current word in the current sentence. They are included in the feature set because, according to our observations, holders often appear in the beginning or at the end of the sentence. The context features include the information of the nearest verbs with respect to the current word. If the current word is a part of the opinion holder, its nearest verb could be an opinion operator. The collocation of the current word and the nearest verb are considered. For the opinion information, the appearance of opinion operators, positive words, neutral words, and negative words are considered. Here, positive words are words used to express a supportive attitude, such as success, good, etc.; neutral words express an impartial attitude, such as no comment, difficult to say, etc.; negative words express opposite attitude, such as objection, accusation, etc. The occurrence of opinion words may indicate the existence of opinions, and opinions may further indicate the existence of their holders. The method of utilizing contextual and opinion information, along with the features of nouns and pronouns, are first proposed in this paper.", |
|
"cite_spans": [ |
|
{ |
|
"start": 133, |
|
"end": 169, |
|
"text": "(Lafferty, McCallum, & Pereira, 2001", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 266, |
|
"end": 278, |
|
"text": "(Kudo, 2003)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 349, |
|
"end": 356, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Opinion Holder Labeling Stage", |
|
"sec_num": "3.3" |
|
}, |
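Since CRF++ is used for the labeling, the training data and feature template take CRF++'s standard form: one token per line with whitespace-separated columns and the label in the last column, and sentences separated by blank lines. The snippet below sketches two separate files (a training file and a template); the column layout (word, POS, NE tag, HITSO label) and the window sizes are only an illustration of Table 2, not the authors' exact configuration.

```
外交部   Nc   ORG   H
發言人   Na   O     I
陳健     Nb   PER   T
表示     VE   O     O

# template file: unigram features over a window of the word column,
# plus the POS and NE columns of the current token
U00:%x[-2,0]
U01:%x[-1,0]
U02:%x[0,0]
U03:%x[1,0]
U04:%x[2,0]
U05:%x[0,1]
U06:%x[0,2]
B
```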
|
{ |
|
"text": "The training set for the NTCIR-7 MOAT Task, introduced in Section 4.1, is adopted for extracting training features and for building models. As the size of this set is not large, the co-training method is adopted to improve the performance (Blum & Mitchell, 1998) . Co-training is a semi-supervised learning method that trains models together with labeled and un-labeled materials. In co-training, sentences with high CRF confidence scores are selected, and sentences among them without words that are portions of the opinion holder are dropped. These selected sentences, along with their predicted labels, are fed back to the CRF system as the training sentences in the next iteration. The co-training process is shown in Figure 2 . ", |
|
"cite_spans": [ |
|
{ |
|
"start": 239, |
|
"end": 262, |
|
"text": "(Blum & Mitchell, 1998)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 722, |
|
"end": 730, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Opinion Holder Labeling Stage", |
|
"sec_num": "3.3" |
|
}, |
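The selection-and-feedback loop can be summarized with a small sketch. Here train_crf and the model's predict method are hypothetical stand-ins for retraining CRF++ and decoding with a per-sentence confidence score; the threshold and iteration count are illustrative defaults (Section 4.4 later reports 0.7 as the best threshold).

```python
# Sketch of the confidence-based feedback loop described above.
# `train_crf(labeled)` and `model.predict(sentence)` are hypothetical wrappers
# around CRF++ training and decoding; predict returns (labels, confidence).
def holder_self_training(labeled, unlabeled, train_crf, threshold=0.7, iterations=3):
    for _ in range(iterations):
        model = train_crf(labeled)
        kept, remaining = [], []
        for sentence in unlabeled:
            labels, confidence = model.predict(sentence)
            # Keep confident sentences that contain at least one holder word;
            # sentences labeled entirely "O" are dropped, as in the paper.
            if confidence >= threshold and any(tag != "O" for tag in labels):
                kept.append((sentence, labels))
            else:
                remaining.append(sentence)
        labeled = labeled + kept
        unlabeled = remaining
    return train_crf(labeled)
```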
|
{ |
|
"text": "Post-processing includes two steps: processing phrasal opinion holders and recovering named entities. As mentioned above, opinion holder labeling tells whether the current word is a portion of the opinion holder. Nevertheless, opinion holders often are composed of multiple words, and we need to combine these words to propose the final result if the holder is longer than one word.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Post-processing Stage", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "In our experiments for processing phrasal opinion holders, five labels are used to label words in CRF opinion holder labeling: H (head word), I (middle word), T (tail word), S (single word), and O (not opinion portion), abbreviated as HITSO hereafter. Instead, when working with CHAID, the decision tree, it can only generate two labels \"YES\" and \"NO\" to tell whether the current word is a portion of the opinion holder. In this phase, comparing the performances of using different label sets is not the focus, so we just use the label set HITSO. The effect of different labels will be tested later.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Processing Phrasal Opinion Holders", |
|
"sec_num": "3.4.1" |
|
}, |
|
{ |
|
"text": "Different post-processing rules are applied according to the tagging sets. If words are labeled by the tagging set H, I, T, S, and O by CRF, we use the following rules to combine words to a phrase if necessary:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Processing Phrasal Opinion Holders", |
|
"sec_num": "3.4.1" |
|
}, |
|
{ |
|
"text": "(1) Find the H labeled word with the highest confidence score in the current sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Processing Phrasal Opinion Holders", |
|
"sec_num": "3.4.1" |
|
}, |
|
{ |
|
"text": "(2) Combine the sequent I labeled words until a T labeled word is found.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Processing Phrasal Opinion Holders", |
|
"sec_num": "3.4.1" |
|
}, |
|
{ |
|
"text": "(3) The final opinion holder includes words that begin from the word with label H found in 1to the word with label T found in (2).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Processing Phrasal Opinion Holders", |
|
"sec_num": "3.4.1" |
|
}, |
|
{ |
|
"text": "(4) If all words in the current sentence are all labeled \"O\", the opinion holder will be set to the author.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Processing Phrasal Opinion Holders", |
|
"sec_num": "3.4.1" |
|
}, |
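The four rules can be written down directly. The sketch below assumes the CRF output provides a per-word label and confidence score; the fallback for an S (single-word) label is an assumption, since the rules above do not spell it out.

```python
# Sketch of the HITSO combination rules (Section 3.4.1).
# `tagged` is a list of (word, label, confidence) triples for one sentence.
def combine_hitso(tagged):
    if all(label == "O" for _, label, _ in tagged):
        return "AUTHOR"                                           # rule (4)
    heads = [i for i, (_, label, _) in enumerate(tagged) if label == "H"]
    if not heads:
        # Assumed fallback: take an S-labeled single-word holder if present.
        singles = [word for word, label, _ in tagged if label == "S"]
        return singles[0] if singles else "AUTHOR"
    start = max(heads, key=lambda i: tagged[i][2])                # rule (1): best-scoring H
    end = start
    for j in range(start + 1, len(tagged)):                       # rule (2): extend over I ... T
        label = tagged[j][1]
        if label in ("I", "T"):
            end = j
        if label != "I":
            break
    return "".join(word for word, _, _ in tagged[start:end + 1])  # rule (3)
```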
|
{ |
|
"text": "If words are labeled with the tagging set {YES, NO} by CHAID, we use the following three rules to combine words to a phrase if necessary:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Processing Phrasal Opinion Holders", |
|
"sec_num": "3.4.1" |
|
}, |
|
{ |
|
"text": "(1) Combine consecutive nouns. For example, \"\u5370\ufa01 (India, Nc) \u7e3d\u7d71 (president, Na) \u74e6\u5e0c\u5fb7 (Abdurrahman Wahid, Nb)\" are combined into one opinion holder phrase.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Processing Phrasal Opinion Holders", |
|
"sec_num": "3.4.1" |
|
}, |
|
{ |
|
"text": "(2) Use conjunctions and \"\u3001\" (a mark in Chinese punctuations to list items in a series) to combine nouns into one holder group. For example, \"\u74e6\u5e0c\u5fb7 (Abdurrahman Wahid, Nb) \u3001 (PAUSECATEGORY) \u67ef\uf9f4\u9813 (Clinton, Nb) \u8207 (and, Caa) \u5c0f\u6df5\u60e0\u4e09 (Keizo Obuchi, Nb)\" are combined into one opinion holder group. If people express the same opinion together, they are usually grouped as an opinion holder group in sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Processing Phrasal Opinion Holders", |
|
"sec_num": "3.4.1" |
|
}, |
|
{ |
|
"text": "(3) If all words in the current sentence are labeled \"NO,\" the opinion holder will be set to the author.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Processing Phrasal Opinion Holders", |
|
"sec_num": "3.4.1" |
|
}, |
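A corresponding sketch for the CHAID output follows; the POS prefixes and connector set are assumptions based on the CKIP-style tags in the examples above (Nb, Nc, Caa) rather than the authors' exact rule implementation.

```python
# Sketch of the {YES, NO} combination rules (Section 3.4.1).
# words / pos_tags / labels are parallel lists for one sentence.
NOUN_PREFIXES = ("Na", "Nb", "Nc")        # common, proper, and place nouns (CKIP-style)
CONNECTORS = {"、", "與", "和", "及"}       # enumeration mark and common conjunctions

def combine_yes_no(words, pos_tags, labels):
    if all(label == "NO" for label in labels):
        return "AUTHOR"                                    # rule (3)
    holder = []
    for word, pos, label in zip(words, pos_tags, labels):
        if label == "YES" and pos.startswith(NOUN_PREFIXES):
            holder.append(word)                            # rule (1): consecutive nouns
        elif holder and (word in CONNECTORS or pos == "Caa"):
            holder.append(word)                            # rule (2): keep the group together
        elif holder:
            break                                          # the holder group has ended
    return "".join(holder) if holder else "AUTHOR"
```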
|
{ |
|
"text": "Translated named entities tend to be segmented incorrectly in text pre-processing. These segmentation errors will make opinion holders incomplete, so recovering wrongly segmented named entities is necessary. In this paper, we postulate that the number of occurrences of a complete named entity should be the same as its partial sequences. Therefore, we will compare the occurrences of the current holder sequence with the current holder sequence plus its previous/following word to decide whether we should combine them to generate a more complete opinion holder. For example, in the sentence \"Indonesian big man Suharto ruled Indonesia for 32 years,\" the name \"Suharto\" is translated into \"\u8607\u54c8\u6258\". Nevertheless, this name is wrongly segmented into two words \"\u8607\u54c8\" and \"\u6258\". In this case, we will check whether the number of appearance of \"\u8607\u54c8\u6258\" is the same as \"\u8607\u54c8\". If it is, we will combine \"\u8607\u54c8\" and \"\u6258\" into one word \"\u8607\u54c8\u6258\". This process is done iteratively until the numbers of appearance of the current word and that plus the previous/following word are not the same. In this example, we further check the next word \"\u7d71\". In doing so, we will find that the numbers of appearance of \"\u8607\u54c8\u6258\" and \"\u8607\u54c8\u6258\u7d71\" are not the same. Therefore, the recovery process stops and we propose \"\u8607\u54c8\u6258\" as the opinion holder. This process is shown as follows. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recovering Named Entities", |
|
"sec_num": "3.4.2" |
|
}, |
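The iterative check can be sketched as below. The count function is assumed to return the number of occurrences of a string in the corpus; the function extends the labeled holder word by word while the longer string is exactly as frequent as the current one, as in the 蘇哈/托 example above.

```python
# Sketch of the named entity recovery step (Section 3.4.2).
# `count(s)` is assumed to return how often the string s occurs in the corpus.
def recover_named_entity(holder, left_words, right_words, count):
    # Absorb following words while the merged string is equally frequent,
    # e.g. count("蘇哈托") == count("蘇哈") -> merge; count("蘇哈托統") differs -> stop.
    for word in right_words:
        if count(holder + word) == count(holder):
            holder = holder + word
        else:
            break
    # Do the same with preceding words, nearest first.
    for word in reversed(left_words):
        if count(word + holder) == count(holder):
            holder = word + holder
        else:
            break
    return holder
```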
|
{ |
|
"text": "The final step is result generation, which considers the results of the author's opinion recognition and opinion holder labeling. Author's opinion recognition classifies all opinion sentences into author's opinions and non-author's opinions, while opinion holder labeling labels the opinion holder. We propose two result generation strategies, and their flowcharts are shown in Figures 3 and 4 . In Result generation strategy A, we have more confidence in the author's opinions recognized by the author's opinion recognition module. Non-author's opinions are passed to the opinion holder labeling module. If there is an opinion holder labeled by this module, we propose it; otherwise, we propose the author as the opinion holder. In Result generation strategy B, we have more confidence in the non-author's opinions recognized by the author's opinion recognition module. Both kinds of opinions are then passed to the opinion holder labeling module. For the non-author's opinions, we force the opinion holder labeling module to propose an opinion holder by considering the most possible opinion holder among words in the current sentence. For the author's opinions, if there is an opinion holder labeled by this module, we propose it; otherwise, we propose the author as the opinion holder. After the text pre-processing, author's opinion recognition, opinion holder labeling, post-processing, and the result generation, the opinion holder is determined.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 378, |
|
"end": 393, |
|
"text": "Figures 3 and 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Result Generation Stage", |
|
"sec_num": "3.5" |
|
}, |
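The two strategies reduce to a small decision function each. In the sketch below, is_author_opinion is the SVM decision, labeled_holder is the labeling module's output (None when every word is labeled O), and forced_holder stands for the most probable holder the labeling module can be forced to propose; all three names are illustrative, not the authors' interface.

```python
# Sketch of Result generation strategies A and B (Section 3.5).
def generate_holder_strategy_a(is_author_opinion, labeled_holder):
    # Strategy A: trust the "author's opinion" decisions; only non-author
    # sentences consult the opinion holder labeling module.
    if is_author_opinion:
        return "AUTHOR"
    return labeled_holder if labeled_holder else "AUTHOR"

def generate_holder_strategy_b(is_author_opinion, labeled_holder, forced_holder):
    # Strategy B: trust the "non-author's opinion" decisions; force a holder
    # for them, and fall back to the author only for author-classified sentences.
    if not is_author_opinion:
        return labeled_holder if labeled_holder else forced_holder
    return labeled_holder if labeled_holder else "AUTHOR"
```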
|
{ |
|
"text": "In this section, the experimental corpus and resources are introduced. Results of author's opinion recognition, opinion holder labeling, and the complete opinion holder identification are shown and discussed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment and Discussion", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "The adopted experimental corpus is the NTCIR-6 Pilot Task & NTCIR-7 MOAT Task Traditional Chinese corpora. NTCIR 2 is one of the three important international evaluative forums. MOAT (Multilingual Opinion Analysis Task) is one of its evaluative tasks (Seki et al. 2008) . The MOAT task provides English, Japanese, Traditional Chinese, and Simplified Chinese materials, which include news articles collected from 1998 to 2001. Labels for relevance, the opinion sentence, the opinion polarity, and the opinion holder are provided by the NTCIR-6 Pilot Task. In addition to these labels, labels for the opinion target are provided later by the NTCIR-7 MOAT Task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 251, |
|
"end": 269, |
|
"text": "(Seki et al. 2008)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus and Evaluation Tasks", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Each MOAT corpus is classified into training and testing sets. The original purpose of the corpus released prior to the formal run (the training corpus mentioned here) is to provide samples to the participants, so its quantity is comparably small. The NTCIR-7 MOAT training set includes documents of 3 topics, consisting of 1,509 sentences, with 944 of them being opinion sentences; the NTCIR-7 MOAT testing set includes documents of 14 topics, consisting of 4,665 sentences, with 2,174 of them being opinion sentences. The opinion labels are sentence-based. As the size of the NTCIR-7 MOAT training set is small, the NTCIR-6 Pilot Task testing set is added for training in this paper. This set includes documents of 29 topics, consisting of 9,240 sentences, with 5,453 of them being opinion sentences. Labels for opinion sentences and opinion holders in these training and testing sets are utilized for opinion holder identification in this paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus and Evaluation Tasks", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The opinion dictionary and named entity dictionaries are adopted in this paper. The opinion dictionary includes opinion operators, positive, neutral, and negative opinion words extracted from the NTCIR7 MOAT training set, and NTUSD (Ku & Chen, 2007) . It is utilized for feature extraction. The person name, location name, and organization name dictionaries are collected for named entity recognition here, including the Million person name dictionary, Sinica corpus 3 , CNA translated name dictionary, The Revised Chinese Dictionary 4 , Japanese common family name dictionary, MOE translated location name dictionary, translated foreign location name list, and Taiwan national industry list.", |
|
"cite_spans": [ |
|
{ |
|
"start": 232, |
|
"end": 249, |
|
"text": "(Ku & Chen, 2007)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Resources", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In this experiment, the NTCIR-6 testing set and the NTCIR-7 training set were utilized for training, while the NTCIR-7 testing set was used for testing. NTCIR generates the gold standard under two metrics: the strict metric and the lenient metric. We selected opinion sentences by the lenient metric as the gold standard for testing, i.e. for these sentences, at least two out of three annotators label them as opinions. Precision, recall, f-score, and accuracy were adopted for evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Author's Opinion Recognition", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The experimental corpus was annotated by three annotators. Therefore, there are inconsistent annotations of opinion holders in some opinion sentences. For example, in the sentence \"Because Taiwan understands, using nuclear weapons will destroy the relationship with the U.S.\" one annotator labeled \"Taiwan\" as the holder, while the other selected \"the author\". The example sentence is an implicit nested opinion, and inconsistent annotations are found often in nested opinions. We checked all of the sentences to see if there are many inconsistently labeled opinion holders. The results are shown in Figure 5 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 600, |
|
"end": 608, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Author's Opinion Recognition", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "For the NTCIR-6 corpus, if the annotators could find any opinion holder, it was reported; if they could not, the opinion holder was automatically set to the author. Therefore, we cannot know how many opinion holders are inconsistently labeled (see the \"?%\" in Figure 5 ). Instead, for the NTCIR-7 corpus, the opinion holder \"the author\" was explicitly annotated. Therefore, we are able to find the percentage of inconsistently labeled opinion holders. Figure 5 shows that the opinion holder in 19% of sentences is consistently labeled as the author, while that in 15% of sentences is inconsistently labeled as the author and the other named entity. These two percentages are close to each other.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 260, |
|
"end": 268, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 452, |
|
"end": 460, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Author's Opinion Recognition", |
|
"sec_num": "4.3" |
|
}, |
|
|
{ |
|
"text": "From experiments, we have found that using opinion sentences as the training materials performs better than using all sentences and using both NTCIR-6 (all) and NTCIR-7 (testing) sentences for training also performs better than only using one of them. Therefore, in the experiments for author's opinion recognition, opinion sentences of NTCIR-6 and NTICR-7 were both used for training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Figure 5. The percentages of the author's opinions in NTCIR-6 and NTCIR-7 corpora", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We have discussed the inconsistency in the gold standard for the author's opinion recognition. Therefore, three settings are tested: treating inconsistently labeled opinions as the author's opinions, treating inconsistently labeled opinions as non-author's opinions, and expelling inconsistent labeled opinions in the training set. Table 3 shows the testing performances of these three settings. From Table 3 , we find that the first setting, treating inconsistently labeled opinions as the author's opinions, performs the best. It achieves an f-score of 79.98%, which outperforms the setting of treating them as non-author's opinions. The f-score, 65.0%, is even worse when the inconsistently labeled data is not considered, compared to the former two settings. Therefore, we conclude that inconsistently labeled data is useful and we should treat them as the author's opinions. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 332, |
|
"end": 339, |
|
"text": "Table 3", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 401, |
|
"end": 408, |
|
"text": "Table 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Figure 5. The percentages of the author's opinions in NTCIR-6 and NTCIR-7 corpora", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For opinion holder labeling, the NTCIR-7 training set is utilized. Here, we compare the performances of using strict opinion sentences (agreed on by three annotators) and lenient opinion sentences for training. After labeling words in sentences, the results of author's opinion recognition and opinion holder labeling are considered together to generate the proposed opinion holder. In this experiment, only opinion sentences correctly proposed by the system are evaluated, so the real performance of the opinion holder labeling can be calculated without the propagation errors from opinion extraction. As the number of sentences is the same as the number of opinion holders, the precision, recall, and f-score are not used as evaluation metrics because they will be equal. Instead, the number of correct holders and wrong holders, and the set f-score is adopted. The formula for calculating the set f-score is shown below. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Holder Labeling", |
|
"sec_num": "4.4" |
|
}, |
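The parsed text omits the formula it refers to. A standard set-based f-score over the word set P of the proposed holder and the word set G of the gold holder would take the form below; this reconstruction is an assumption, not the paper's exact definition.

```latex
% Assumed reconstruction of the set f-score over proposed (P) and gold (G) holder word sets.
\mathrm{set\ precision} = \frac{|P \cap G|}{|P|}, \qquad
\mathrm{set\ recall} = \frac{|P \cap G|}{|G|}, \qquad
\text{set f-score} = \frac{2 \cdot \mathrm{set\ precision} \cdot \mathrm{set\ recall}}{\mathrm{set\ precision} + \mathrm{set\ recall}}
```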
|
{ |
|
"text": "As mentioned in the previous section, CHAID and CRF are both tested for their performances in opinion holder labeling. Their performances are shown in Table 4 . Table 4 shows that CRF performs much better than CHAID in opinion holder labeling for both strict and lenient opinion sentences. In the setting CHAID+CRF, we first use CHAID to get the predicted labels, and use these labels together with other features as the input features for CRF. Results show that this setting can slightly improve the performance. One reason could be that the tagging set for CRF is larger than CHAID. Another reason could be that CRF has better performance in combining words with labels, while CHAID needs to apply rules on the proposed labels to find the complete opinion holder. The best performances achieved are a set f-score of 70.57% for strict opinion sentences and 67.83% for lenient opinion sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 151, |
|
"end": 158, |
|
"text": "Table 4", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 161, |
|
"end": 168, |
|
"text": "Table 4", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Opinion Holder Labeling", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Furthermore, we classify labeling errors into six types according to the position of the proposed opinion holder and the correct opinion holder:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Holder Labeling", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "(1) The proposed opinion holder is not related to the correct holder in the aspect of position.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Holder Labeling", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "29.1% errors are of this type.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Holder Labeling", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "(2) The proposed opinion holder has one additional word in the front or rear, compared to the correct opinion holder. For example, \"\uf939\u65af\u66fc\u65e5\u524d\" (Russman the other day, proposed) and \"\uf939\u65af\u66fc\" (Russman, correct). 18.1% errors are of this type.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Holder Labeling", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "(3) The system only proposes the title of the opinion holder, but not the name. For example, \"\u79d1 \uf96a\u4f0f\u8457\u540d\uf96c\u88d4\uf9b4\u8896\" (The famous Serbian leader of Kosovo, proposed) and \"\u79d1\uf96a\u4f0f\u8457\u540d \uf96c\u88d4\uf9b4\u8896\u7279\uf925\u4f0a\u79d1\u7dad\uf909\" (The famous Serbian leader of Kosovo Trajkovic, correct). 8.3%", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Holder Labeling", |
|
"sec_num": "4.4" |
|
}, |
|
|
{ |
|
"text": "(4) The system only proposes the modifier of the correct opinion holder. For example, \"\u8a72\" (That, proposed) and \"\u8a72\u88c1\u6c7a\" (That decide, correct). 7.5% errors are of this type.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Holder Labeling", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "(5) The proposed opinion holder has two or more additional words compared to the correct opinion holder. For example, \"\u72c4\uf91f\u5728\u8a18\u8005\u6703\" (Dylan proposed in the press conference) and \"\u72c4\uf91f\" (Dylan, correct). 5.5% errors are of this type.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Holder Labeling", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "(6) The proposed opinion holder is incomplete. 4.7% errors are of this type.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Holder Labeling", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "From these errors, we find that most errors occur because the system cannot determine the first word and the last word of the opinion holder properly. Therefore, different tagging sets were tested (IO, ISO, HTO, HISO, HITSO, etc.), and we have found that using the tagging set {H, I, O} can achieve the best performance, which is the set f-score of 70.57% for strict opinion sentences. We also propose the name entity recovery method to deal with errors of the sixth type. Experiments show that with co-training, the best confidence score threshold 0.7 and the named entity recovery, our system achieves the best set f-score of 72.03%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Opinion Holder Labeling", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "After author's opinion recognition and opinion holder labeling, we need to propose the opinion holder according to these results. Table 5 shows the performances of applying different result generation strategies. Table 5 shows that the performance of applying Result generation strategy B is better than applying Result generation strategy A. It indicates that the proposed method for author's opinion recognition works better in determining non-author's opinion. That is, we can be more sure when the system tells that the current opinion is not expressed by the author. Using fewer author relevant features may be the reason for this phenomenon.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 137, |
|
"text": "Table 5", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 213, |
|
"end": 220, |
|
"text": "Table 5", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments for Opinion Holder Identification", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "NTCIR-7 evaluates the system performance in two ways. One is to evaluate sentences correctly proposed by the system, which is also used in the previous section; the other is to evaluate all opinion sentences in the testing set. Table 6 shows the performances of all participants' systems together with the performance of our system. WIA's system performs the best in both evaluation metrics. WIA adopts heuristic rules to design their systems. Therefore, our system performs the best among all systems using machine learning methods. Moreover, our system performs better for strict opinion sentences, which is different from other systems. In other words, our system is good at identifying the holder of opinions that were consistently annotated by annotators. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 228, |
|
"end": 235, |
|
"text": "Table 6", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Performances of NTCIR-7 Participants", |
|
"sec_num": "4.6" |
|
}, |
|
{ |
|
"text": "We proposed a machine learning based method for opinion holder identification. We classified this task into two subtasks: author's opinion recognition and opinion holder labeling. SVM was adopted for author's opinion recognition, and CRF was adopted for opinion holder labeling. We proposed lexical, syntactic, contextual, and opinion features. Named entities and punctuation marks were also utilized as features. We tested different tagging sets to find the best set {H, I, O}. Co-training was proposed to solve the problem of insufficient training materials, and results merging strategies were proposed to improve the performance. We also mentioned the methods of utilizing inconsistent annotated materials and analyzed system errors to find solutions for improving the performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "The proposed system for the opinion holder identification achieved an f-score of 0.734, which is the best among machine learning based systems and is close to the state of the art. The state of the art system adopts heuristic rules. Nevertheless, heuristic rule based systems like it are difficult to rebuild because the rules are usually not described in detail in the previous literature.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "In the future, we hope to solve the co-reference resolution problem, which is important in named entity extraction and also in opinion holder extraction. In addition, we plan to add parsing information to improve the performance. Finding a good named entity recovery algorithm is also one of our next attempts. In summary, utilizing techniques of opinion holder identification in the opinion analysis system to compare opinions of different people is our future goal.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "The part of speech tagging set is listed in Technical Report no. 95-02/98-04, Chinese Knowledge Information Processing Group, Academia Sinica.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://research.nii.ac.jp/ntcir/ 3 http://dbo.sinica.edu.tw/ftms-bin/kiwi1/mkiwi.sh?language=1 4 http://dict.revised.moe.edu.tw/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Combining labeled and unlabeled data with co-training", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Blum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Conference on Computational Learning Theory", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "92--100", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Blum, A., & Mitchell, T. (1998). Combining labeled and unlabeled data with co-training. Conference on Computational Learning Theory, 92-100.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Identifying expressions of opinion in context", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Breck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 20th International Joint Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2683--2688", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Breck, E., Choi, Y., & Cardie, C. (2007). Identifying expressions of opinion in context. Proceedings of the 20th International Joint Conference on Artificial Intelligence, 2683-2688.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "LIBSVM: a library for support vector machines", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chang, C. C., & Lin, C. J. (2001). LIBSVM: a library for support vector machines.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Identifying sources of opinions with conditional random fields and extraction patterns", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Riloff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Patwardhan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of EMNLP conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "355--362", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Choi, Y., Cardie, C., Riloff, E., & Patwardhan, S. (2005). Identifying sources of opinions with conditional random fields and extraction patterns. Proceedings of EMNLP conference, 355-362.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Determining the sentiment of opinions", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the COLING conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1367--1374", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kim, S. M., & Hovy, E. (2004). Determining the sentiment of opinions. Proceedings of the COLING conference, 1367-1374.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Extracting opinions, opinion holders, and topics expressed in online news media text", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Workshop on Sentiment and Subjectivity in Text at the joint COLING-ACL conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kim, S. M., & Hovy, E. (2006). Extracting opinions, opinion holders, and topics expressed in online news media text. Proceedings of the Workshop on Sentiment and Subjectivity in Text at the joint COLING-ACL conference, 1-8.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Identifying opinion holders in opinion text from online newspapers", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Jung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Myaeng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "International Conference on Granular Computing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "699--702", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kim, Y., Jung, Y., & Myaeng, S. H. (2007). Identifying opinion holders in opinion text from online newspapers. International Conference on Granular Computing, 699-702.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Extracting topic-related opinions and their targets in NTCIR-7", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Myaeng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Seventh NTCIR Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "247--254", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kim, Y., Kim, S., & Myaeng, S. H. (2008). Extracting topic-related opinions and their targets in NTCIR-7. Proceedings of the Seventh NTCIR Workshop, 247-254.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Mining opinions from the web: beyond relevance retrieval", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Ku", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Journal of American Society for Information Science and Technology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1838--1850", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ku, L. W., & Chen, H. H. (2007). Mining opinions from the web: beyond relevance retrieval. Journal of American Society for Information Science and Technology, 1838-1850.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "CRF++: yet another CRF toolkit", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Kudo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kudo, T. (2003). CRF++: yet another CRF toolkit. http://crfpp.sourceforge.net/.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Conditional random fields: probabilistic models for segmenting and labeling sequence data", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "282--289", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lafferty, J., McCallum, A., & Pereira, F. (2001). Conditional random fields: probabilistic models for segmenting and labeling sequence data. Proceedings of International Conference on Machine Learning, 282-289.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "NLPR at Multilingual Opinion Analysis Task in NTCIR7", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Seventh NTCIR Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "226--231", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liu, K., & Zhao, J. (2008). NLPR at Multilingual Opinion Analysis Task in NTCIR7. Proceedings of the Seventh NTCIR Workshop, 226-231.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "An Approach of Using Multiple Dictionaries and Conditional Random Field in Chinese Segmentation and Part of Speech Tagging", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Lo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lo, Y. S. (2008). An Approach of Using Multiple Dictionaries and Conditional Random Field in Chinese Segmentation and Part of Speech Tagging. Master Thesis, National Taiwan University.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Detecting opinionated sentences by extracting context information", |
|
"authors": [ |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Meng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Seventh NTCIR Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "268--271", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Meng, X., & Wang, H. (2008). Detecting opinionated sentences by extracting context information. Proceedings of the Seventh NTCIR Workshop, 268-271.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "YALE: rapid prototyping for complex data mining tasks", |
|
"authors": [ |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Mierswa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Wurst", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Klinkenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Scholz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Euler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "935--940", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mierswa, I., Wurst, M., Klinkenberg, R., Scholz, M., & Euler, T. (2006). YALE: rapid prototyping for complex data mining tasks. Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 935-940.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Pang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "1--135", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pang, B., & Lee, L. (2008). Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1-2), 1-135.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Overview of multilingual opinion analysis task at NTCIR-7", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Seki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Evans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Ku", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Kando", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Seventh NTCIR Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "185--203", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Seki, Y., Evans, D. K., Ku, L. W., Sun, L., Chen, H. H., & Kando, N. (2008). Overview of multilingual opinion analysis task at NTCIR-7. Proceedings of the Seventh NTCIR Workshop, 185-203.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Multilingual opinion holder identification using author and authority viewpoints", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Seki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Kando", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Aono", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Journal of Information Processing and Management", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "189--199", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Seki, Y., Kando, N., & Aono, M. (2009). Multilingual opinion holder identification using author and authority viewpoints. Journal of Information Processing and Management, 189-199.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Tornado in multilingual opinion analysis: a transductive learning approach for Chinese sentimental polarity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Seventh NTCIR Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "301--306", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wu, Y. C., Yang, L. W., Shen, J. Y., Chen, L. Y., & Wu, S. T. (2008). Tornado in multilingual opinion analysis: a transductive learning approach for Chinese sentimental polarity recognition. Proceedings of the Seventh NTCIR Workshop, 301-306.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Coarse-Fine opinion mining -WIA in NTCIR-7 MOAT task", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Wong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Seventh NTCIR Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "307--313", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xu, R., & Wong, K. F. (2008). Coarse-Fine opinion mining -WIA in NTCIR-7 MOAT task. Proceedings of the Seventh NTCIR Workshop, 307-313.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"text": "Figure 2. Co-training process", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"text": "of occurrence of OP equals that of the string {w i-1 ,OP}, i--; while j n < if the number of occurrence of OP equals that of the string {OP,w j+1 }, j++;", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"text": "Result generation strategy A Result generation strategy B", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td>Feature Type</td><td>Feature Name</td><td>Feature Description</td></tr><tr><td/><td>FHasI</td><td>Does the word \"I\" (\u6211) appear?</td></tr><tr><td>Lexicon</td><td>FHasWe fNumI</td><td>Does the word \"we\" (\u6211\u5011) appear? The number of the word \"I\" (\u6211)</td></tr><tr><td/><td>fNumWe</td><td>The number of the word \"we\" (\u6211\u5011)</td></tr><tr><td/><td>fHasPronoun</td><td>Are there any pronouns?</td></tr><tr><td>Part of speech</td><td>fHasManPronoun fNumPronoun</td><td>Are there any personal pronouns? The number of pronouns</td></tr><tr><td/><td>fNumManPronoun</td><td>The number of personal pronouns</td></tr><tr><td/><td>fHasPer</td><td>Is there a person name (named entity)?</td></tr><tr><td/><td>fHasLoc</td><td>Is there a location name (named entity)?</td></tr><tr><td/><td>fHasOrg</td><td>Is there an organization name (named entity)?</td></tr><tr><td/><td>fHasNa</td><td>Are there any common nouns?</td></tr><tr><td/><td>fHasNb</td><td>Are there any proper nouns?</td></tr><tr><td>Named entity</td><td>fHasNc fNumLoc</td><td>Are there any common location nouns? The number of location names (named entity)</td></tr><tr><td/><td>fNumOrg</td><td>The number of organization names (named entity)</td></tr><tr><td/><td>fNumPer</td><td>The number of personal names (named entity)</td></tr><tr><td/><td>fNumNa</td><td>The number of common names</td></tr><tr><td/><td>fNumNb</td><td>The number of proper names</td></tr><tr><td/><td>fNumNc</td><td>The number of common location names</td></tr><tr><td/><td>fHasExclamation</td><td/></tr><tr><td>Punctuation</td><td/><td/></tr><tr><td>mark</td><td/><td/></tr></table>" |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"text": "POS) of the nearest verb 1 , e.g., VA (transitive verb), VB (intransitive verb), etc.", |
|
"content": "<table><tr><td>Feature Type</td><td>Feature Name</td><td>Feature Description</td></tr><tr><td>Lexicon</td><td>fWord</td><td>The current Word</td></tr><tr><td/><td>fPOS</td><td>Part of speech of the current word</td></tr><tr><td>Part of speech</td><td>fIsPronoun</td><td>Is the current word a pronoun?</td></tr><tr><td/><td>fIsNoun</td><td>Is the current word a noun?</td></tr><tr><td/><td>fIsPer</td><td>Is the current word a person name?</td></tr><tr><td>Named entity</td><td>fIsLoc</td><td>Is the current word a location name?</td></tr><tr><td/><td>fIsOrg</td><td>Is the current word an organization name?</td></tr><tr><td>Punctuation</td><td>fAfterParen</td><td>Does the current word appear one word after a parenthesis \"\u300d\" or \"\u3011\"?</td></tr><tr><td>mark</td><td>fBeforeColon</td><td>Does the current word appear one word before a colon \"\uff1a\" or \":\"?</td></tr><tr><td/><td>fNearSenStart</td><td>Is the current word close to the sentence head?</td></tr><tr><td/><td>fSenLen</td><td>The number of words in the current sentence</td></tr><tr><td>Sentential</td><td>fWordOrder</td><td>The absolute position of the current word in the sentence</td></tr><tr><td/><td>fWordPerc</td><td>The absolute position (in percentage) of the current word in the sentence</td></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td>Settings</td><td>Precision</td><td>Recall</td><td>f-score</td><td>Accuracy</td></tr><tr><td>Author's opinions</td><td>69.68%</td><td>93.85%</td><td>79.98%</td><td>83.49%</td></tr><tr><td>Non-author's opinions</td><td>64.87%</td><td>95.94%</td><td>77.40%</td><td>80.31%</td></tr><tr><td>Expelling inconsistency</td><td>50.52%</td><td>91.53%</td><td>65.10%</td><td>77.28%</td></tr></table>" |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td/><td>Method</td><td>Correct #</td><td>Error #</td><td>Set f-score</td></tr><tr><td/><td>CHAID</td><td>564</td><td>605</td><td>48.16%</td></tr><tr><td>Strict</td><td>CRF</td><td>818</td><td>351</td><td>69.89%</td></tr><tr><td/><td>CHAID+CRF</td><td>825</td><td>344</td><td>70.57%</td></tr><tr><td/><td>CHAID</td><td>981</td><td>967</td><td>50.31%</td></tr><tr><td>Lenient</td><td>CRF</td><td>1317</td><td>631</td><td>67.57%</td></tr><tr><td/><td>CHAID+CRF</td><td>1322</td><td>626</td><td>67.83%</td></tr></table>" |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td colspan=\"5\">. The performance of opinion holder identification:</td></tr><tr><td/><td colspan=\"4\">applying different result generation strategies</td></tr><tr><td/><td>Strategy</td><td>Correct #</td><td>Wrong #</td><td>Set f-score</td></tr><tr><td>strict</td><td>A B</td><td>829 858</td><td>340 310</td><td>70.92% 73.40%</td></tr><tr><td>lenient</td><td>A B</td><td>1338 1372</td><td>611 576</td><td>68.65% 70.40%</td></tr></table>" |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td/><td>Participants</td><td>Number of correctly proposed opinion sentences</td><td>Evaluate correctly proposed opinion sentences Set f-Score</td><td colspan=\"3\">Evaluate all opinion sentences Precision Recall f-score</td></tr><tr><td/><td>WIA</td><td>757</td><td>82.30%</td><td>19.88%</td><td>49.52%</td><td>28.38%</td></tr><tr><td/><td>iclpku-1</td><td>880</td><td>57.84%</td><td>13.03%</td><td>40.53%</td><td>19.72%</td></tr><tr><td/><td>iclpku -2</td><td>989</td><td>58.04%</td><td>10.35%</td><td>45.70%</td><td>16.88%</td></tr><tr><td>Strict</td><td>TTRD-1</td><td>1213</td><td>54.91%</td><td>8.22%</td><td>52.95%</td><td>14.23%</td></tr><tr><td/><td>TTRD-2</td><td>866</td><td>58.31%</td><td>9.72%</td><td>40.13%</td><td>15.65%</td></tr><tr><td/><td>NTU-1</td><td>1169</td><td>48.16%</td><td>8.14%</td><td>44.90%</td><td>13.78%</td></tr><tr><td/><td>Our System</td><td>1169</td><td>73.40%</td><td>12.38%</td><td>68.31%</td><td>20.97%</td></tr><tr><td/><td>WIA</td><td>1134</td><td>82.54%</td><td>29.92%</td><td>43.05%</td><td>35.31%</td></tr><tr><td/><td>iclpku-1</td><td>1364</td><td>58.72%</td><td>20.51%</td><td>36.84%</td><td>26.35%</td></tr><tr><td/><td>iclpku -2</td><td>1606</td><td>59.90%</td><td>17.33%</td><td>44.20%</td><td>24.90%</td></tr><tr><td>Lenient</td><td>TTRD-1</td><td>2070</td><td>56.47%</td><td>16.78%</td><td>40.02%</td><td>23.65%</td></tr><tr><td/><td>TTRD-2</td><td>1464</td><td>59.49%</td><td>14.43%</td><td>53.73%</td><td>22.75%</td></tr><tr><td/><td>NTU-1</td><td>1948</td><td>50.31%</td><td>14.43%</td><td>53.73%</td><td>22.75%</td></tr><tr><td/><td>Our System</td><td>1948</td><td>70.40%</td><td>19.80%</td><td>63.11%</td><td>30.15%</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |