{ "paper_id": "I13-1016", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:14:53.030109Z" }, "title": "Feature Selection Using a Semantic Hierarchy for Event Recognition and Type Classification", "authors": [ { "first": "Yoonjae", "middle": [], "last": "Jeong", "suffix": "", "affiliation": { "laboratory": "", "institution": "Korea Advanced Institute of Science and Technology (KAIST)", "location": { "addrLine": "291 Daehak-ro (373-1 Guseong-dong), Yuseong-gu", "postCode": "305-701", "settlement": "Daejeon", "country": "Republic of Korea" } }, "email": "" }, { "first": "Sung-Hyon", "middle": [], "last": "Myaeng", "suffix": "", "affiliation": { "laboratory": "", "institution": "Korea Advanced Institute of Science and Technology (KAIST)", "location": { "addrLine": "291 Daehak-ro (373-1 Guseong-dong), Yuseong-gu", "postCode": "305-701", "settlement": "Daejeon", "country": "Republic of Korea" } }, "email": "myaeng@kaist.ac.kr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Event recognition and event type classification are among the important areas in text mining. A state-of-the-art approach utilizing deep-level lexical semantics and syntactic dependencies suffers from a limitation of requiring too large feature space. In this paper, we propose a novel feature selection method using a semantic hierarchy of features based on WordNet relations and syntactic dependencies. Compared to the well-known feature selection methods, our proposed method reduces the feature space significantly while keeping the same level of effectiveness. For noun events, it improves effectiveness as well as efficiency. Moreover, we expect the proposed feature selection can be applied to the other types of text classification using hierarchically organized semantic resources such as WordNet.", "pdf_parse": { "paper_id": "I13-1016", "_pdf_hash": "", "abstract": [ { "text": "Event recognition and event type classification are among the important areas in text mining. A state-of-the-art approach utilizing deep-level lexical semantics and syntactic dependencies suffers from a limitation of requiring too large feature space. In this paper, we propose a novel feature selection method using a semantic hierarchy of features based on WordNet relations and syntactic dependencies. Compared to the well-known feature selection methods, our proposed method reduces the feature space significantly while keeping the same level of effectiveness. For noun events, it improves effectiveness as well as efficiency. Moreover, we expect the proposed feature selection can be applied to the other types of text classification using hierarchically organized semantic resources such as WordNet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Feature selection is an important issue in textbased classification because features can be generated in a number of different ways from text. Selecting features affects not only efficiency when the space is big but also classification effectiveness by eliminating noise features (Manning, Raghavan, & Sch\u00fctze, 2008) . 
In this paper, we propose a new feature selection method that utilizes semantic aspects of word features, and we discuss its relative merits compared to other well-known feature selection methods.", "cite_spans": [ { "start": 280, "end": 316, "text": "(Manning, Raghavan, & Sch\u00fctze, 2008)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Among many text-based classification problems, this research focuses on event recognition (a kind of binary classification) and type classification, which have been studied extensively to improve the performance of applications such as automatic summarization (Daniel, Radev, & Allison, 2003) and question answering (Pustejovsky, 2002). For event recognition and type classification, TimeML has served as a representative annotation scheme of events (Pustejovsky, Casta\u00f1o, et al., 2003), which are defined as situations that happen or occur and are expressed by verbs, nominalizations, adjectives, predicative clauses or prepositional phrases. TimeML defines seven types of events, REPORTING, PERCEPTION, ASPECTUAL, I_ACTION, I_STATE, STATE, and OCCURRENCE (Pustejovsky, Knippen, Littman, & Saur\u00ed, 2007), one of which is assigned to each recognized event in event type classification.", "cite_spans": [ { "start": 254, "end": 286, "text": "(Daniel, Radev, & Allison, 2003)", "ref_id": "BIBREF2" }, { "start": 310, "end": 329, "text": "(Pustejovsky, 2002)", "ref_id": "BIBREF9" }, { "start": 445, "end": 481, "text": "(Pustejovsky, Casta\u00f1o, et al., 2003)", "ref_id": "BIBREF10" }, { "start": 749, "end": 795, "text": "(Pustejovsky, Knippen, Littman, & Saur\u00ed, 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Different approaches to recognizing and classifying TimeML events have been proposed, ranging from rule-based approaches (Saur\u00ed, Knippen, Verhagen, & Pustejovsky, 2005) to supervised machine learning techniques based on lexical semantic classes and morpho-syntactic information around events (Bethard & Martin, 2006; Boguraev & Ando, 2007; Jeong & Myaeng, 2013; Llorens, Saquete, & Navarro-Colorado, 2010). Jeong & Myaeng (2013) recently showed that using deeper-level semantics increases performance. They obtained the best performance in their classification experiments when lexical semantic features based on WordNet hypernyms up to a maximum depth of eight were used for the event candidates and for the words having a syntactic dependency with them. While the approach showed a meaningful improvement, it suffers from generating too many features.", "cite_spans": [ { "start": 116, "end": 163, "text": "(Saur\u00ed, Knippen, Verhagen, & Pustejovsky, 2005)", "ref_id": "BIBREF13" }, { "start": 287, "end": 311, "text": "(Bethard & Martin, 2006;", "ref_id": "BIBREF0" }, { "start": 312, "end": 334, "text": "Boguraev & Ando, 2007;", "ref_id": "BIBREF1" }, { "start": 335, "end": 356, "text": "Jeong & Myaeng, 2013;", "ref_id": "BIBREF3" }, { "start": 357, "end": 400, "text": "Llorens, Saquete, & Navarro-Colorado, 2010)", "ref_id": "BIBREF5" }, { "start": 403, "end": 424, "text": "Jeong & Myaeng (2013)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Semantic features that can be mapped to a structure like WordNet have hierarchical relationships. 
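To make the hierarchical nature of such features concrete, the following minimal sketch enumerates hypernym-chain features for a word using NLTK's WordNet interface. The function name and the first-sense shortcut are our own illustration, not the paper's code; the actual pipeline disambiguates senses with BabelNet, and max_depth=8 mirrors the depth limit reported above.

```python
from nltk.corpus import wordnet as wn  # assumes the WordNet corpus is installed

def hypernym_features(lemma, pos=wn.NOUN, max_depth=8):
    """Collect a synset and its hypernym chain (up to max_depth) as features."""
    synsets = wn.synsets(lemma, pos=pos)
    if not synsets:
        return []
    features, current = [], synsets[0]    # first sense; the paper uses WSD instead
    for _ in range(max_depth):
        features.append(current.name())   # e.g. 'drop.n.01'
        hypernyms = current.hypernyms()
        if not hypernyms:
            break
        current = hypernyms[0]            # follow one hypernym chain upward
    return features

print(hypernym_features("drop"))
```

Each returned synset name is one lexical semantic feature, and features drawn from the same chain stand in exactly the hypernym-hyponym relationship discussed next. 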
In such a hierarchy, when two features have a hypernym-hyponym relationship, the higher-level feature semantically encompasses the lower-level one (see Figure 1-(a)). If a conventional feature selection method were used, therefore, the selected features would include both overly specific, low-level features and more general ancestors that cover the characteristics of the children (see Figure 1-(b)). When the general features are accurate and specific enough to represent the class, their descendants are unnecessary and redundant. Such redundant features of a similar kind cause not only efficiency problems but also potential overfitting, because the resulting model may become biased towards the semantics covered by the sub-tree containing them. It is important to select features that are sufficiently general to encompass the more specific features found in the training data, but specific enough to utilize the deep-level semantics available in the hierarchy (see Figure 1-(c)). The leftmost feature in (c) covers the semantics of the two features under it without having to keep them. Choosing the feature in the center and the rightmost feature has a similar effect; at the same time, it avoids the overly general feature that encompasses both of them, as well as the inappropriate sibling of the rightmost one. In other words, we should select a feature that is as general as possible, as long as it is not irrelevant to the class, so that it covers the semantics of the features underneath it; those descendants can then be discarded for better efficiency.", "cite_spans": [], "ref_spans": [ { "start": 235, "end": 243, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 472, "end": 480, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1087, "end": 1095, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In short, we propose a method for solving the problem of using features that are semantically redundant. Assuming that all the features can be organized in the form of a hierarchy, the method attempts to select features that are as general as possible while still relevant to the class, so that no semantically redundant features are kept.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We first describe the tasks of recognition and type classification of TimeML events. For word-based event recognition and type classification, we converted the phrase-based annotations into a form with BIO tags. For each word in a document, we assign a label indicating whether it is inside or outside of an event (i.e., its BIO2 1 label) as well as its type. For type classification, in addition, each word must be classified into one of the known event classes. Figure 2 illustrates an example of chunking and labeling the components of an event in a sentence.", "cite_spans": [], "ref_spans": [ { "start": 461, "end": 469, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Event Recognition and Type Classification Task", "sec_num": "2" }, { "text": "All O O 75 O O people O O on B-EVENT B-STATE board I-EVENT I-STATE the O O Aeroflot O O Airbus O O died B-EVENT B-OCCURRENCE . O O", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Event Label Event Type Label", "sec_num": null }, { "text": "Figure 2 . 
Event chunking for a sentence, \"All 75 people on board the Aeroflot Airbus died.\" B-EVENT, I-EVENT and O refer to the beginning, inside and outside of an event.", "cite_spans": [], "ref_spans": [ { "start": 4, "end": 12, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Word Event Label Event Type Label", "sec_num": null }, { "text": "Our method consists of three parts: preprocessing, feature extraction and selection, and classification. The preprocessing part analyzes raw text for tokenization, PoS tagging, and syntactic (dependency) parsing. This is done with the Stanford CoreNLP package 2 , a suite of natural language processing tools. Then, the feature extraction part converts the preprocessed data into the feature space, followed by feature selection. Finally, the classification part determines whether a given word is an event and, if so, its type, using a maximum entropy (ME) classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Event Label Event Type Label", "sec_num": null }, { "text": "Because the goal of the proposed method is to automatically select the most valuable features, we generate feature sets based on the same criteria as Jeong & Myaeng's work (2013), which showed better performance for TimeML events than the previous state-of-the-art approaches. The details are given below:", "cite_spans": [ { "start": 150, "end": 178, "text": "Jeong & Myaeng's work (2013)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Feature Candidate Generation", "sec_num": "3" }, { "text": "Lexical Semantic Features (LSF). The set of target words' lemmas and their all-depth WordNet semantic classes (i.e., hypernyms). For example, a noun \"drop\" that is mapped to such a WordNet class is always an event regardless of its context in a sentence in the TimeBank corpus (Pustejovsky, Hanks, et al., 2003).", "cite_spans": [ { "start": 278, "end": 312, "text": "(Pustejovsky, Hanks, et al., 2003)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Feature Candidate Generation", "sec_num": "3" }, { "text": "Windows Features (WF). The lemma, hypernyms, and PoS of the context defined by a five-word window [-2, +2] around a target word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Candidate Generation", "sec_num": "3" }, { "text": "They are similar to WF, but the context is defined by syntactic dependencies. This feature type differs from WF because the context may go beyond the fixed-size window and the features are not just words. Increasing the window size for WF instead of using this feature type is not an option, because too large a context would introduce noise. The four dependencies we consider are: subject (SUBJ), object (OBJ), complement (COMP), and modifier (MOD).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency-based Features (DF)", "sec_num": null }, { "text": "\u2022 SUBJ type. A feature is formed with the governor or dependent word and its hypernyms that has the SUBJect relation (nsubj and nsubjpass) with the target word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency-based Features (DF)", "sec_num": null }, { "text": "\u2022 OBJ type. It is the governor or dependent word and its hypernyms, which has the OBJect relation (dobj, iobj, and pobj) with the target word. 
In \"\u2026 delayed the game \u2026\", for instance, the verb \"delay\" can describe the temporal state of its object noun, \"game\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency-based Features (DF)", "sec_num": null }, { "text": "\uf0b7 COMP type. It indicates the governor or dependent word and its hypernyms, which has the COMPlement relation (acomp and xcomp) with the target word. In \"\u2026 called President Bush a liar \u2026\", for example, the verb \"called\" makes the state of its object (\"Bush\") into the complement noun, \"liar\". In this case, the word \"liar\" becomes a STATE event.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency-based Features (DF)", "sec_num": null }, { "text": "\uf0b7 MOD type. It refers to the dependent words and their hypernyms in MODifier relation (amod, advmod, partmod, tmod and so on). This feature type is based on the intuition that some modifiers such as temporal expression reveal the word it modifies has a temporal state and therefore is likely to be an event.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency-based Features (DF)", "sec_num": null }, { "text": "They are a combination of LSF and DF (or WF). A certain DF may not be an absolute clue for an event by itself but only when it co-occurs with a certain lexical or semantic aspect of the target word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combined Features (CF).", "sec_num": null }, { "text": "Since a large number of features are generated with the aforementioned feature generation method, it is necessary to filter out those whose roles in classification are minimal. We first remove the feature candidates whose frequency in the training data is less than two. If a target word containing the feature candidate is determined not to be an event more than 50% in the training data, it is also eliminated. The remaining feature candidates are then organized into a meaning hierarchy so that we can apply the tree-based feature selection method. An entailment relationship between two features, fi >> fj, is established by a hypernym/hyponym relationship, syntactic dependency, or occurrence sequence as in Table 1 . A and D represent an ancestor and a descendent in a feature hierarchy tree with A >> D. We call the LSF and DF (or WF) features in CF as target and context elements, respectively. LSF can be an ancestor of CF because LSF does not consider the surrounding context of a target word whereas CF includes the context. CFLD and CFLW mean CF of LSF and DF and CF of LSF and WF, respectively. ", "cite_spans": [], "ref_spans": [ { "start": 713, "end": 720, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Feature Selection Based on Semantic Hierarchy", "sec_num": "4" }, { "text": "Given that the entailment relationship >> can be established between two features, we can construct a feature tree that becomes a basis for treebased feature selection. We begin with a tree that only has a root node R, a meta-feature that is the ancestor of all features. R entails and keeps adding new features to the tree until all the features are added to the tree. We define a, d, and c for ancestor, descendent, and child features with the relationships a >> d and a > c where > means c is a child of a, restricting that there is no node between a and c with a >> c. 
Figure 3 illustrates the detailed algorithm of feature tree generation.", "cite_spans": [], "ref_spans": [ { "start": 573, "end": 581, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Feature Tree Generation", "sec_num": "4.1" }, { "text": "When a new feature f is added to the (sub-)tree whose root is a, with a >> f, f either becomes a child of a or is added to one of the sub-trees of a (line 9~28). If there is a c such that c >> f, f is added to the subtree whose root is c (line 14~17). On the other hand, if f >> c, f replaces c, and c is inserted into the sub-tree whose root is f (line 19~25). Finally, if f has no entailment relation with any of the children of a, f is added as a child of a (line 26~27). ", "cite_spans": [ { "start": 339, "end": 351, "text": "(line 19~25)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Feature Tree Generation", "sec_num": "4.1" }, { "text": "The key idea of the selection algorithm we devised is to evaluate each of the paths in the tree and select the appropriate node (i.e., feature) on it. A path is defined to be the list of nodes between the root and a leaf node. In essence, the problem of selecting nodes or features from a tree is converted into smaller problems of selecting a node from individual paths. The process is illustrated in Figure 4 , where each node of the tree except the root represents a feature. The tree has n paths, corresponding to the number of leaf nodes. The algorithm selects the most representative node on each path, which is marked as a black node in the figure. To select the most representative feature on a path, we employed the notion of lift, which has been used widely in the area of association rule mining to compute the degree to which two items are associated (Tuff\u00e9ry, 2011). More specifically, it is defined as in Equation (1), where P(f) indicates the probability of a feature f in the training data set and P(E | f) is the conditional probability of a word being an event given that f occurs.", "cite_spans": [ { "start": 842, "end": 857, "text": "(Tuff\u00e9ry, 2011)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 397, "end": 405, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Tree-Based Feature Selection", "sec_num": "4.2" }, { "text": "lift(f) = P(E | f) / P(f) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree-Based Feature Selection", "sec_num": "4.2" }, { "text": "While general feature selection methods such as \u03c7 2 are based on the degree of belief, our selection method considers the reliability and applicability (or generality) of a feature. In other words, a feature we choose should have a high lift value (i.e., high reliability) and lie closest to the root on a path, so that we can broaden its applicability. These criteria are particularly important when the amount of training data is not sufficient.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree-Based Feature Selection", "sec_num": "4.2" }, { "text": "However, selecting the feature at the highest level in the tree may not be the best choice. In Figure 4 , for example, even if the node Fi in grey is determined to be the most representative one for path 1, it may not be the best one. In this case, Fj may be a better one because it happens to be the representative node for the path between Fi and L1. However, there is a chance that the sub-tree of Fi may have important features (i.e., L3, L5) that end up elevating Fi's weight unfairly. 
Instead of Fi, using Fj would be a better choice.", "cite_spans": [], "ref_spans": [ { "start": 95, "end": 103, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Tree-Based Feature Selection", "sec_num": "4.2" }, { "text": "In order to handle this problem, we developed an algorithm whose key idea works as in Figure 5 . We first collect all the representative features from the paths based on the reliability and generality criteria mentioned above (line 29~45). For each representative node, we check whether any of its descendant nodes have been selected as a representative node of other paths (line 21). If the condition is met, the node is no longer considered a representative node (line 23). The same process is applied to the sub-tree whose root is the node just deleted from the set of representative nodes (line 25). Up to this point, the process does not require manually checking the classification performance for the selected features. We select the final features among those obtained through the above process by employing a widely used feature selection method (in our case, \u03c7 2 ). This is because the most representative feature on a path might not be an effective one in the entire feature space.", "cite_spans": [], "ref_spans": [ { "start": 90, "end": 98, "text": "Figure 5", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Tree-Based Feature Selection", "sec_num": "4.2" }, { "text": "The main goal of the experiment is to examine the efficacy of the proposed tree-based feature selection method in the context of event recognition and event type classification. As the test collection, we use the TimeBank 1.2 corpus (Pustejovsky, Hanks, et al., 2003), which is the most recent version of TimeBank, annotated with the TimeML 1.2.1 specification. It contains 183 news articles and more than 61,000 non-punctuation tokens, among which 7,935 represent events.", "cite_spans": [ { "start": 230, "end": 264, "text": "(Pustejovsky, Hanks, et al., 2003)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "We analyzed the corpus to investigate the distribution of PoS (part of speech) for the tokens annotated as events. Most events are expressed as verbs and nouns: together, these two PoS types cover about 93% of all the event tokens, split into about 65% for verbs and 28% for nouns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "The experiment is designed to see the effect of the selection method by using the feature candidates generated by the work of Jeong & Myaeng (2013), which showed the best performance in TimeML event recognition and classification in the literature. It generates feature sets based on the same criteria as the proposed method, using syntactic dependencies and WordNet hypernyms. To find the concept (i.e., synset) of a target word, we applied the word sense disambiguation module of BabelNet (Ponzetto & Navigli, 2010). 
We also used the Stanford Parser (Klein & Manning, 2003) to obtain the syntactic-dependency-based features.", "cite_spans": [ { "start": 126, "end": 147, "text": "Jeong & Myaeng (2013)", "ref_id": "BIBREF3" }, { "start": 491, "end": 517, "text": "(Ponzetto & Navigli, 2010)", "ref_id": "BIBREF8" }, { "start": 549, "end": 572, "text": "(Klein & Manning, 2003)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "A maximum entropy (ME) classifier was used because it showed the best performance for the tasks at hand, according to the literature. We also considered SVM, another popular machine learning algorithm in natural language processing. The evaluation was done by 5-fold cross-validation, with the data for each fold randomly selected. For the classifiers, we used the Mallet machine learning package (McCallum, 2002) and Weka (Witten, Frank, & Hall, 2011).", "cite_spans": [ { "start": 398, "end": 414, "text": "(McCallum, 2002)", "ref_id": "BIBREF7" }, { "start": 424, "end": 453, "text": "(Witten, Frank, & Hall, 2011)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "We first evaluated the proposed tree-based feature selection in comparison with two widely accepted feature selection methods: information gain (IG) and \u03c7 2 . For each feature selection method, we chose the number of features that gave the best performance in F1. In Table 2 , TSEL means the pure tree-based feature selection without the reselection process using \u03c7 2 , whereas TSEL+\u03c7 2 means the proposed method followed by \u03c7 2 .", "cite_spans": [], "ref_spans": [ { "start": 267, "end": 274, "text": "Table 2", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Evaluation", "sec_num": "5.2" }, { "text": "Compared to \u03c7 2 3 , TSEL reduced the feature space dramatically, by 73.93% and 54.42% for event recognition and type classification, respectively, while the decrease in effectiveness was insignificant for both tasks. The decrease was compensated for by the reselection process (hence the TSEL+\u03c7 2 case) to the point of a 1.26% improvement over the \u03c7 2 case. For type classification, only 40.68% of the features required by \u03c7 2 were enough to achieve the same level of effectiveness as \u03c7 2 . Due to the reduced feature space, the running times of the classification tasks (excluding preprocessing) also decreased considerably: the time savings of TSEL over \u03c7 2 were about 40% and 45% in recognition and type classification, respectively. We use \u03c7 2 for discussion instead of IG because it showed better performance than IG for verb and noun event classification, which is the main focus of this research. Table 3. Comparisons in effectiveness for event recognition and type classification using an SVM classifier. Looking at the performance of different PoS types, we found that the performance of noun events was improved more meaningfully with a significantly reduced feature set. With feature set reduction ratios of 81.66% and 81.50% for recognition and type classification, respectively, we achieved increases of 6.85% and 3.94% in F1 4 . For verbs, the numbers of features used for class recognition were also reduced significantly, but the F1 scores decreased slightly. Our analysis shows that the increase in effectiveness for nouns is mainly attributed to the fact that the synsets of most nouns are located at a deep level of the WordNet hierarchy. 
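The depth of noun versus verb synsets can be checked directly against WordNet; the small sketch below, using NLTK, is our own illustration and not part of the experimental setup:

```python
from itertools import islice
from nltk.corpus import wordnet as wn  # assumes the WordNet corpus is installed

def avg_max_depth(pos, sample=500):
    """Average length of the longest hypernym path over a sample of synsets."""
    synsets = list(islice(wn.all_synsets(pos), sample))
    return sum(s.max_depth() for s in synsets) / len(synsets)

print("avg noun depth:", avg_max_depth(wn.NOUN))  # nouns: deep hierarchy
print("avg verb depth:", avg_max_depth(wn.VERB))  # verbs: much shallower
```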
In contrast, the hierarchy for verbs is not as deep as that for nouns. Note that the tree-based selection method is most helpful when heavy redundancy among features in a deep hierarchy causes a problem. Table 4. Feature space sizes and effectiveness values for noun and verb events in event recognition and type classification. Figures 6 and 7 show the performance changes incurred by reducing the feature sets for the different feature selection methods. The lines start from the point where all the selected features were used in each method and continue in decrements of 10% of the feature set, down to a minimum of 10% of the originally selected feature set. The starting points of TSEL+\u03c7 2 indicate the results of pure TSEL. Despite the elimination of many features, pure TSEL does little harm to F1 compared with the best cases of IG and \u03c7 2 . This clearly shows that reducing the size of the feature set is less detrimental with the proposed method than with the other selection methods in almost all cases. TSEL also shows that valuable features can be selected without manually checking performance against the feature space size. For event type classification, the manual selection process (TSEL+\u03c7 2 ) is still needed in order to find the best features, but it yields higher effectiveness. Table 5 shows detailed scores for all the event types separately. An improvement is observed for most of the event types except OCCURRENCE. Our analysis shows that this is related to the size of the training data. Since OCCURRENCE events account for about 53% of all the events in the TimeBank corpus, the training data for the OCCURRENCE type is much bigger than for the others. This indicates that feature redundancy is problematic when the training data is relatively small, and that careful selection of features is particularly important to avoid overfitting. Table 5. Performance for different event types (unit: F1). * indicates that the percent increase or decrease is statistically significant with p < 0.05.", "cite_spans": [], "ref_spans": [ { "start": 910, "end": 917, "text": "Table 3", "ref_id": null }, { "start": 1871, "end": 1878, "text": "Table 4", "ref_id": null }, { "start": 1996, "end": 2004, "text": "Figure 6", "ref_id": "FIGREF6" }, { "start": 2975, "end": 2982, "text": "Table 5", "ref_id": null }, { "start": 3545, "end": 3552, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Evaluation", "sec_num": "5.2" }, { "text": "EVITA (Saur\u00ed et al., 2005) is the first event recognition tool for the TimeML specification. It recognizes events by using both linguistic and statistical techniques. It uses manually encoded rules based on linguistic information as its main features to recognize events. It also adds WordNet classes to those rules for nominal event recognition, checking whether the head word of a noun phrase is included in the WordNet event classes. For sense disambiguation of nouns, it utilizes a Bayesian classifier trained on the SemCor corpus. Boguraev & Ando (2007) analyzed the TimeBank corpus and presented a machine-learning-based approach for automatic TimeML event annotation. They set out the task as a classification problem and used a robust risk minimization (RRM) classifier to solve it. They used lexical and morphological attributes and syntactic chunk types in bi- and tri-gram windows as features. Bethard & Martin (2006) developed a system, STEP, for TimeML event recognition and type classification. 
They adopted syntactic and semantic features and formulated the event recognition task as classification in the word-chunking paradigm. They used a rich set of features: textual, morphological, syntactic-dependency, and some selected WordNet classes. They implemented a Support Vector Machine (SVM) model based on those features. Llorens et al. (2010) presented an evaluation of event recognition and type classification. They added semantic roles as features and built a Conditional Random Field (CRF) model to recognize events. They conducted experiments on the contribution of semantic roles and CRFs and reported that the CRF model improved performance but that the effects of the semantic role features were not significant. Jeong & Myaeng (2013) argued and demonstrated that unit feature dependency information and deep-level WordNet hypernyms are useful for event recognition and type classification. Their proposed method utilizes various features, including lexical semantic and dependency-based combined features. On the TimeBank 1.2 corpus, the approach achieved F1 scores of 0.8601 and 0.7058 in event recognition and type classification, respectively.", "cite_spans": [ { "start": 6, "end": 26, "text": "(Saur\u00ed et al., 2005)", "ref_id": "BIBREF13" }, { "start": 531, "end": 553, "text": "Boguraev & Ando (2007)", "ref_id": "BIBREF1" }, { "start": 902, "end": 925, "text": "Bethard & Martin (2006)", "ref_id": "BIBREF0" }, { "start": 1336, "end": 1357, "text": "Llorens et al. (2010)", "ref_id": "BIBREF5" }, { "start": 1736, "end": 1757, "text": "Jeong & Myaeng (2013)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "In this paper, we proposed a novel feature selection method for event recognition and event type classification, which utilizes a semantic hierarchy of features. While our current work is based on the WordNet hierarchy and syntactic dependencies, the proposed method can be applied whenever a feature hierarchy is available, and it shows that valuable features can be selected without manually checking performance against the feature space size.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Our experimental results show that the proposed method is significantly effective in reducing the feature space compared to the well-known feature selection methods, and yet the overall effectiveness is similar to, or sometimes better than, a state-of-the-art approach, depending on the PoS of the events. In particular, the effectiveness for noun events was improved quite meaningfully while the feature space was reduced significantly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Although the proposed method showed encouraging results, it still has some limitations. One issue concerns the depth of the features in the hierarchy. For verbs, most features are located at shallow levels, so the feature space reduction ratio is lower than that for nouns. This implies that we need other approaches for verbs. Another issue concerns recall: the proposed method showed high precision but relatively low recall. 
We conjecture that one reason is the lack of lexical information due to the small size of the TimeBank corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Not only to improve recall but also to extend the proposed method, we need to utilize other, larger-scale resources for these tasks and even to apply the proposed method to other types of text classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "IOB2 format: (B)egin, (I)nside, and (O)utside. 2 Stanford CoreNLP, http://nlp.stanford.edu/software/corenlp.shtml", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The results are statistically significant with p < 0.05.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Identification of event mentions and their semantic class", "authors": [ { "first": "S", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "J", "middle": [ "H" ], "last": "Martin", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "146--154", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bethard, S., & Martin, J. H. (2006). Identification of event mentions and their semantic class. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (pp. 146-154). Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Effective Use of TimeBank for TimeML Analysis", "authors": [ { "first": "B", "middle": [], "last": "Boguraev", "suffix": "" }, { "first": "R", "middle": [], "last": "Ando", "suffix": "" } ], "year": 2007, "venue": "Annotating, Extracting and Reasoning about Time and Events", "volume": "4795", "issue": "", "pages": "41--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Boguraev, B., & Ando, R. (2007). Effective Use of TimeBank for TimeML Analysis. In F. Schilder, G. Katz, & J. Pustejovsky (Eds.), Annotating, Extracting and Reasoning about Time and Events (Vol. 4795, pp. 41-58). Springer Berlin Heidelberg.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Subevent based multi-document summarization", "authors": [ { "first": "N", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "D", "middle": [], "last": "Radev", "suffix": "" }, { "first": "T", "middle": [], "last": "Allison", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the HLT-NAACL 03 on Text summarization workshop", "volume": "5", "issue": "", "pages": "9--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel, N., Radev, D., & Allison, T. (2003). Subevent based multi-document summarization. In Proceedings of the HLT-NAACL 03 on Text summarization workshop (Vol. 5, pp. 9-16). Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Using WordNet Hypernyms and Dependency Features for Phrasal-level Event Recognition and Type Classification", "authors": [ { "first": "Y", "middle": [], "last": "Jeong", "suffix": "" }, { "first": "S.-H", "middle": [], "last": "Myaeng", "suffix": "" } ], "year": 2013, "venue": "Advances in Information Retrieval", "volume": "7814", "issue": "", "pages": "267--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeong, Y., & Myaeng, S.-H. (2013). 
Using WordNet Hypernyms and Dependency Features for Phrasal-level Event Recognition and Type Classification. In P. Serdyukov, P. Braslavski, S. Kuznetsov, J. Kamps, S. R\u00fcger, E. Agichtein, \u2026 E. Yilmaz (Eds.), Advances in Information Retrieval (Vol. 7814, pp. 267-278). Springer Berlin Heidelberg.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Accurate unlexicalized parsing", "authors": [ { "first": "D", "middle": [], "last": "Klein", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "423--430", "other_ids": {}, "num": null, "urls": [], "raw_text": "Klein, D., & Manning, C. D. (2003). Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics (pp. 423-430). Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "TimeML events recognition and classification: learning CRF models with semantic roles", "authors": [ { "first": "H", "middle": [], "last": "Llorens", "suffix": "" }, { "first": "E", "middle": [], "last": "Saquete", "suffix": "" }, { "first": "B", "middle": [], "last": "Navarro-Colorado", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "725--733", "other_ids": {}, "num": null, "urls": [], "raw_text": "Llorens, H., Saquete, E., & Navarro-Colorado, B. (2010). TimeML events recognition and classification: learning CRF models with semantic roles. In Proceedings of the 23rd International Conference on Computational Linguistics (pp. 725-733). Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Introduction to Information Retrieval", "authors": [ { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "P", "middle": [], "last": "Raghavan", "suffix": "" }, { "first": "H", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manning, C. D., Raghavan, P., & Sch\u00fctze, H. (2008). Introduction to Information Retrieval. Cambridge University Press.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "MALLET: A Machine Learning for Language Toolkit", "authors": [ { "first": "A", "middle": [ "K" ], "last": "Mccallum", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "McCallum, A. K. (2002). MALLET: A Machine Learning for Language Toolkit.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Knowledge-rich Word Sense Disambiguation rivaling supervised systems", "authors": [ { "first": "S", "middle": [ "P" ], "last": "Ponzetto", "suffix": "" }, { "first": "R", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1522--1531", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ponzetto, S. P., & Navigli, R. (2010). Knowledge-rich Word Sense Disambiguation rivaling supervised systems. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (pp. 1522-1531). 
Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "TERQAS: Time and Event Recognition for Question Answering Systems", "authors": [ { "first": "J", "middle": [], "last": "Pustejovsky", "suffix": "" } ], "year": 2002, "venue": "Proceedings of ARDA Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pustejovsky, J. (2002). TERQAS: Time and Event Recognition for Question Answering Systems. In Proceedings of ARDA Workshop.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "TimeML: Robust Specification of Event and Temporal Expressions in Text", "authors": [ { "first": "J", "middle": [], "last": "Pustejovsky", "suffix": "" }, { "first": "J", "middle": [], "last": "Casta\u00f1o", "suffix": "" }, { "first": "R", "middle": [], "last": "Ingria", "suffix": "" }, { "first": "R", "middle": [], "last": "Saur\u00ed", "suffix": "" }, { "first": "R", "middle": [], "last": "Gaizauskas", "suffix": "" }, { "first": "A", "middle": [], "last": "Setzer", "suffix": "" }, { "first": "G", "middle": [], "last": "Katz", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 5th International Workshop on Computational Semantics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pustejovsky, J., Casta\u00f1o, J., Ingria, R., Saur\u00ed, R., Gaizauskas, R., Setzer, A., & Katz, G. (2003). TimeML: Robust Specification of Event and Temporal Expressions in Text. In Proceedings of the 5th International Workshop on Computational Semantics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The TIMEBANK Corpus", "authors": [ { "first": "J", "middle": [], "last": "Pustejovsky", "suffix": "" }, { "first": "P", "middle": [], "last": "Hanks", "suffix": "" }, { "first": "R", "middle": [], "last": "Saur\u00ed", "suffix": "" }, { "first": "A", "middle": [], "last": "See", "suffix": "" }, { "first": "R", "middle": [], "last": "Gaizauskas", "suffix": "" }, { "first": "A", "middle": [], "last": "Setzer", "suffix": "" }, { "first": "M", "middle": [], "last": "Lazo", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Corpus Linguistics 2003 conference", "volume": "", "issue": "", "pages": "647--656", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pustejovsky, J., Hanks, P., Saur\u00ed, R., See, A., Gaizauskas, R., Setzer, A., \u2026 Lazo, M. (2003). The TIMEBANK Corpus. In Proceedings of the Corpus Linguistics 2003 conference (pp. 647-656).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Temporal and Event Information in Natural Language Text", "authors": [ { "first": "J", "middle": [], "last": "Pustejovsky", "suffix": "" }, { "first": "R", "middle": [], "last": "Knippen", "suffix": "" }, { "first": "J", "middle": [], "last": "Littman", "suffix": "" }, { "first": "R", "middle": [], "last": "Saur\u00ed", "suffix": "" } ], "year": 2007, "venue": "Computing Meaning", "volume": "83", "issue": "", "pages": "301--346", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pustejovsky, J., Knippen, R., Littman, J., & Saur\u00ed, R. (2007). Temporal and Event Information in Natural Language Text. In H. Bunt, R. Muskens, L. Matthewson, Y. Sharvit, & T. E. Zimmerman (Eds.), Computing Meaning (Vol. 83, pp. 301-346). 
Springer Netherlands.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Evita: a robust event recognizer for QA systems", "authors": [ { "first": "R", "middle": [], "last": "Saur\u00ed", "suffix": "" }, { "first": "R", "middle": [], "last": "Knippen", "suffix": "" }, { "first": "M", "middle": [], "last": "Verhagen", "suffix": "" }, { "first": "J", "middle": [], "last": "Pustejovsky", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "700--707", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saur\u00ed, R., Knippen, R., Verhagen, M., & Pustejovsky, J. (2005). Evita: a robust event recognizer for QA systems. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing (pp. 700-707). Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Data Mining and Statistics for Decision Making", "authors": [ { "first": "S", "middle": [], "last": "Tuff\u00e9ry", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tuff\u00e9ry, S. (2011). Data Mining and Statistics for Decision Making (2nd ed.). John Wiley & Sons.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Data Mining: Practical Machine Learning Tools and Techniques", "authors": [ { "first": "I", "middle": [ "H" ], "last": "Witten", "suffix": "" }, { "first": "E", "middle": [], "last": "Frank", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Hall", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Witten, I. H., Frank, E., & Hall, M. A. (2011). Data Mining: Practical Machine Learning Tools and Techniques (3rd ed.). San Francisco, CA, USA: Morgan Kaufmann.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "Feature Selection in Hierarchical Feature Space" }, "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "Feature Tree Generation Algorithm" }, "FIGREF2": { "type_str": "figure", "num": null, "uris": null, "text": "Figure 4." }, "FIGREF3": { "type_str": "figure", "num": null, "uris": null, "text": "Paths between the root and the leaf nodes in a feature tree" }, "FIGREF4": { "type_str": "figure", "num": null, "uris": null, "text": "Tree-Based Feature Selection Algorithm" }, "FIGREF6": { "type_str": "figure", "num": null, "uris": null, "text": "Performance change with feature set reduction in event recognition in each of the feature selection methods (a) Verb Event Type Classification (b) Noun Event Type Classification" }, "FIGREF7": { "type_str": "figure", "num": null, "uris": null, "text": "Performance change with feature set reduction in event type classification in each of the feature selection methods For the type classification task," }, "TABREF5": { "type_str": "table", "content": "
Event Recognition (SVM)
           | IG      | \u03c7 2   | TSEL              | TSEL+\u03c7 2
# features | 202,495 | 255,371 | 66,578 (-73.93%)  | 64,041 (-74.92%)
P          | 0.8277  | 0.8048  | 0.7338            | 0.8128
R          | 0.8406  | 0.8592  | 0.8806            | 0.8576
F1         | 0.8341  | 0.8311  | 0.8005            | 0.8346
Type Classification (SVM)
           | IG      | \u03c7 2   | TSEL              | TSEL+\u03c7 2
# features | 291,408 | 267,226 | 121,793 (-54.42%) | 108,705 (-59.32%)
P          | 0.6189  | 0.6179  | 0.6633            | 0.6833
R          | 0.6931  | 0.6531  | 0.6790            | 0.6700
F1         | 0.6539  | 0.6350  | 0.6711            | 0.6766
", "num": null, "html": null, "text": "Comparisons in time and effectiveness for event recognition and type classification" } } } }