|
{ |
|
"paper_id": "I13-1006", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:15:27.958446Z" |
|
}, |
|
"title": "Global Model for Hierarchical Multi-Label Text Classification", |
|
"authors": [ |
|
{ |
|
"first": "Yugo", |
|
"middle": [], |
|
"last": "Murawaki", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Kyoto University", |
|
"location": {} |
|
}, |
|
"email": "murawaki@i.kyoto-u.ac.jp" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "The main challenge in hierarchical multilabel text classification is how to leverage hierarchically organized labels. In this paper, we propose to exploit dependencies among multiple labels to be output, which has been left unused in previous studies. To do this, we first formalize this task as a structured prediction problem and propose (1) a global model that jointly outputs multiple labels and (2) a decoding algorithm for it that finds an exact solution with dynamic programming. We then introduce features that capture inter-label dependencies. Experiments show that these features improve performance while reducing the model size.", |
|
"pdf_parse": { |
|
"paper_id": "I13-1006", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "The main challenge in hierarchical multilabel text classification is how to leverage hierarchically organized labels. In this paper, we propose to exploit dependencies among multiple labels to be output, which has been left unused in previous studies. To do this, we first formalize this task as a structured prediction problem and propose (1) a global model that jointly outputs multiple labels and (2) a decoding algorithm for it that finds an exact solution with dynamic programming. We then introduce features that capture inter-label dependencies. Experiments show that these features improve performance while reducing the model size.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Hierarchical organization of a large collection of data has deep roots in human history (Berlin, 1992) . The emergence of electronically-available text has enabled us to take computational approaches to real-world hierarchical text classification tasks. Such text collections include patents, 1 medical taxonomies 2 and Web directories such as Yahoo! and the Open Directory Project. 3 In this paper, we focus on multi-label classification, in which a document may be given more than one label.", |
|
"cite_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 102, |
|
"text": "(Berlin, 1992)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 383, |
|
"end": 384, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Hierarchical multi-label text classification is a challenging task because it typically involves thousands of labels and an exponential number of output candidates. For efficiency, divide-andconquer strategies have often been adopted. Typically, the label hierarchy is mapped to a set of local classifiers, which are invoked in a top-down fashion (Montejo-R\u00e1ez and Ure\u00f1a-L\u00f3pez, 2006; Wang et al., 2011; Sasaki and Weissenbacher, 2012) . However, local search is difficult to harness because a chain of local decisions often leads to what is usually called error propagation (Bennett and Nguyen, 2009) . To alleviate this problem, previous work has resorted to what we collectively call post-training adjustment.", |
|
"cite_spans": [ |
|
{ |
|
"start": 347, |
|
"end": 383, |
|
"text": "(Montejo-R\u00e1ez and Ure\u00f1a-L\u00f3pez, 2006;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 384, |
|
"end": 402, |
|
"text": "Wang et al., 2011;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 403, |
|
"end": 434, |
|
"text": "Sasaki and Weissenbacher, 2012)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 574, |
|
"end": 600, |
|
"text": "(Bennett and Nguyen, 2009)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "One characteristic of the task that has not been explored in previous studies is that multiple labels to be output have dependencies among them. It is difficult even for human annotators to decide how many labels they choose. We conjecture that they consult the label hierarchy when adjusting the number of output labels. For example, if two label candidates are positioned proximally in the hierarchy, human annotators may drop one of them because they provide overlapping information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we propose to exploit inter-label dependencies. To do this, we first formulate hierarchical multi-label text classification as a structured prediction problem. We propose a global model that jointly predicts a set of labels. Under this framework, we replace local search with dynamic programming to find an exact solution. This allows us to extend the model with features for inter-label dependencies. Instead of locally training a set of classifiers, we also propose global training to find globally optimal parameters. Experiments show that these features improve performance while reducing the model size.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In hierarchical multi-label text classification, our goal is to assign to a document a set of labels m \u2282 L that best represents the document. The pre-defined set of labels L is organized as a tree as illustrated in Figure 1 . 4 In our task, only the", |
|
"cite_spans": [ |
|
{ |
|
"start": 226, |
|
"end": 227, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 215, |
|
"end": 223, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task Definition", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "ROOT A B BA BB ROOT !A ROOT !B B !BB B !BA A !AB A !AA AA AB", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Definition", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Figure 1: Example of label hierarchy. Leaf nodes, filled in gray, represent labels to be assigned to documents.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Definition", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "leaf nodes (AA, AB, BA and BB in this example) represent valid labels. Let leaves(c) be a set of the descendants of c, inclusive of c, that are leaf nodes. For example, leaves(A) = {AA, AB}. p \u2192 c denotes an edge from parent p to child c. Let path(c) be a set of edges that connect ROOT to c. For example, path(AB) = {ROOT \u2192 A, A \u2192 AB}. Let tree(m) = \u222a l\u2208m path(l). It corresponds to a subtree that covers m. For example, tree({AA, AB}) = {ROOT \u2192 A, A \u2192 AA, A \u2192 AB}.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Definition", |
|
"sec_num": "2" |
|
}, |
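As an illustration only (not from the paper), these definitions can be coded directly; the sketch below assumes the hierarchy is stored as a parent-to-children dictionary, and the names CHILDREN, PARENT, leaves, path and tree_of are ours:

```python
# Hypothetical encoding of the Figure 1 hierarchy: parent -> list of children.
CHILDREN = {"ROOT": ["A", "B"], "A": ["AA", "AB"], "B": ["BA", "BB"]}
PARENT = {c: p for p, cs in CHILDREN.items() for c in cs}

def leaves(c):
    """leaves(c): leaf descendants of c, inclusive of c itself if c is a leaf."""
    if c not in CHILDREN:
        return {c}
    return set().union(*(leaves(ch) for ch in CHILDREN[c]))

def path(c):
    """path(c): set of edges (parent, child) connecting ROOT to c."""
    edges = set()
    while c != "ROOT":
        edges.add((PARENT[c], c))
        c = PARENT[c]
    return edges

def tree_of(m):
    """tree(m): union of path(l) over l in m, i.e. the subtree covering m."""
    return set().union(*(path(l) for l in m)) if m else set()

assert leaves("A") == {"AA", "AB"}
assert tree_of({"AA", "AB"}) == {("ROOT", "A"), ("A", "AA"), ("A", "AB")}
```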
|
{ |
|
"text": "We assume that each document x is transformed into a feature vector by \u03d5(x). For example, we can use a bag-of-words representation of x.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Definition", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We consider a supervised setting. The training data", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Definition", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "T = {(x i , m i )} T i=1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Definition", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "is used to train our models. Their performance is measured on test data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Definition", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We begin with the flat model, one of the simplest models in multi-label text classification. It ignores the label hierarchy and relies on a set of binary classifiers, each of which decides whether label l is to be assigned to document x.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Flat Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Various models have been used to implement binary classifiers, including Na\u00efve Bayes, Logistic Regression and Support Vector Machines. We use the Perceptron family of algorithms, and it will be extended later to handle more complex structures.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Flat Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The binary classifier for label l is associated with a weight vector w l . If w l \u2022 \u03d5(x) > 0, then l is assigned to x. Note that at least one label is assigned to x. If no labels have positive scores, we choose one label with the highest score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Flat Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "To optimize w l , we convert the original training Finin, 1999; LSHTC3, 2012) . We leave it for future work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 51, |
|
"end": 63, |
|
"text": "Finin, 1999;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 64, |
|
"end": 77, |
|
"text": "LSHTC3, 2012)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Flat Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Algorithm 1 Passive-Aggressive algorithm for training a binary classifier (PA-I).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Flat Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Input:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Flat Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "training data T l = {(x i , y i )} T i=1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Flat Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Output: weight vector w l 1:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Flat Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "w l \u2190 0 2: for n = 1..N do 3: shuffle T l 4: for all (x, y) \u2208 T l do 5: l \u2190 max{0, 1 \u2212 y(w l \u2022 \u03d5(x))} 6: if l > 0 then 7: \u03c4 \u2190 min{C, l \u2225\u03d5(x)\u2225 2 } 8: w l \u2190 w l + \u03c4 y\u03d5(x) 9:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Flat Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "end if 10: end for 11: end for data T into T l .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Flat Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "T l = { (x i , y i ) y i = +1 if l \u2208 m i y i = \u22121 otherwise } T i=1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Flat Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Each document is treated as a positive example if it has label l; otherwise it is a negative example. Since local classifiers are independent of each other, we can trivially parallelize training. We employ the Passive-Aggressive algorithm for training (Crammer et al., 2006 ). Specifically we use PA-I. The pseudo-code is given in Algorithm 1. We set the aggressiveness parameter C as 1.0.", |
|
"cite_spans": [ |
|
{ |
|
"start": 252, |
|
"end": 273, |
|
"text": "(Crammer et al., 2006", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Flat Model", |
|
"sec_num": "3.1" |
|
}, |
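A minimal sketch of Algorithm 1 (PA-I) in Python, assuming each document has already been mapped to a numpy feature vector and that T_l has been built with y = +1 if l belongs to m_i and y = -1 otherwise; the function name and data layout are illustrative, not the authors' code:

```python
import random
import numpy as np

def train_pa1(examples, dim, C=1.0, epochs=10, seed=0):
    """PA-I training of one binary classifier (sketch of Algorithm 1).
    examples: list of (phi_x, y), phi_x a numpy vector, y in {+1, -1}."""
    rng = random.Random(seed)
    w = np.zeros(dim)
    for _ in range(epochs):                                  # line 2
        rng.shuffle(examples)                                # line 3: shuffle T_l
        for phi_x, y in examples:                            # line 4
            loss = max(0.0, 1.0 - y * w.dot(phi_x))          # line 5: hinge loss
            if loss > 0.0:                                   # line 6
                tau = min(C, loss / phi_x.dot(phi_x))        # line 7: PA-I step size
                w += tau * y * phi_x                         # line 8: update
    return w
```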
|
{ |
|
"text": "Unlike the flat model, the tree model exploits the label hierarchy. Each local classifier is now associated with an edge p \u2192 c of the label hierarchy and has a weight vector w p\u2192c . If w p\u2192c \u2022\u03d5(x) > 0, it means that x would belong to descendant(s) of c. Edge classifiers are independent of each other and can be trained in parallel.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree Model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We consider two ways of constructing training data T p\u2192c . ALL -All training data are used as before.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree Model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "T p\u2192c = \uf8f1 \uf8f2 \uf8f3 (x i , y i )| y i = +1if \u2203l \u2208 m i , l \u2208 leaves(c) y i = \u22121otherwise \uf8fc \uf8fd \uf8fe T i=1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree Model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Each document is treated as a positive example if it belongs to a leaf node of c, and the rest is negative examples (Punera and Ghosh, 2008) . SIB -Negative examples are restricted documents that belong to the leaves of c's siblings. for all c such that c is a child of p do 5:", |
|
"cite_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 140, |
|
"text": "(Punera and Ghosh, 2008)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree Model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "T p\u2192c = \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 (x, y)| y = +1 if \u2203l \u2208 m, l \u2208 leaves(c) y = \u22121 if \u2203l \u2208 m, l \u2208 leaves(p) and l / \u2208 leaves(c) \uf8fc \uf8f4 \uf8f4 \uf8fd \uf8f4 \uf8f4 \uf8fe", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree Model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "t \u2190 t \u222a {(c, wp\u2192c \u2022 \u03d5(x))} 6: end for 7: u \u2190 {(c, s) \u2208 t|s > 0} 8:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree Model", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "if u is empty then 9: This leads to a compact model because low-level edges, which are overwhelming in number, have much smaller training data than high-level edges. This is a preferred choice in previous studies (Liu et al., 2005; Wang et al., 2011; Sasaki and Weissenbacher, 2012) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 213, |
|
"end": 231, |
|
"text": "(Liu et al., 2005;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 232, |
|
"end": 250, |
|
"text": "Wang et al., 2011;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 251, |
|
"end": 282, |
|
"text": "Sasaki and Weissenbacher, 2012)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tree Model", |
|
"sec_num": "3.2" |
|
}, |
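The two construction schemes can be sketched as follows; this is our reading of ALL and SIB, with `leaf_sets` standing in for the leaves function of Section 2:

```python
def make_edge_data(docs, p, c, leaf_sets, scheme="ALL"):
    """Build T_{p->c} for the edge classifier p -> c.
    docs: list of (phi_x, m) with m the gold label set;
    leaf_sets: node -> set of leaf labels below it (leaves(.) in Section 2)."""
    data = []
    for phi_x, m in docs:
        if m & leaf_sets[c]:                        # document lies below c: positive
            data.append((phi_x, +1))
        elif scheme == "ALL":                       # every other document is negative
            data.append((phi_x, -1))
        elif scheme == "SIB" and m & leaf_sets[p]:  # below p but not below c: negative
            data.append((phi_x, -1))
    return data
```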
|
|
{ |
|
"text": "In previous studies, the tree model is usually accompanied with top-down local search for decoding (Montejo-R\u00e1ez and Ure\u00f1a-L\u00f3pez, 2006; Wang et al., 2011; Sasaki and Weissenbacher, 2012) . 5 Algorithm 2 is a basic form of top-down local search. At each node, we select children to which edge classifiers return positive scores (Lines 4-7). However, if no children have positive scores, we select one child with the highest score (Lines 8-10). We repeat this until we reach leaves. The decoding of the flat model can be seen as a special case of this search.", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 135, |
|
"text": "(Montejo-R\u00e1ez and Ure\u00f1a-L\u00f3pez, 2006;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 136, |
|
"end": 154, |
|
"text": "Wang et al., 2011;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 155, |
|
"end": 186, |
|
"text": "Sasaki and Weissenbacher, 2012)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 189, |
|
"end": 190, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Top-down Local Search", |
|
"sec_num": "3.3" |
|
}, |
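A hedged sketch of top-down local search in the spirit of Algorithm 2, assuming `children` maps a node to its children and `weights` maps an edge (parent, child) to its numpy weight vector; names are illustrative:

```python
def local_search(phi_x, node, children, weights):
    """Greedy top-down decoding in the spirit of Algorithm 2.
    children: node -> list of children; weights: (parent, child) -> numpy vector."""
    if node not in children:                              # a leaf: output its label
        return {node}
    scored = [(c, weights[(node, c)].dot(phi_x)) for c in children[node]]
    selected = [c for c, s in scored if s > 0]            # lines 4-7
    if not selected:                                      # lines 8-10: fall back
        selected = [max(scored, key=lambda cs: cs[1])[0]]
    labels = set()
    for c in selected:                                    # recurse on kept children
        labels |= local_search(phi_x, c, children, weights)
    return labels
```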
|
{ |
|
"text": "Top-down local search is greedy, hierarchical pruning. If a higher-level classifier drops a child node, we no longer consider its descendants as output candidates. This drastically reduces the number of local classifications in comparison with the flat model. At the same time, however, this is a source of errors. In fact, a chain of local decisions accumulates errors, which is known as error propagation (Bennett and Nguyen, 2009) . If the decision by a higher-level classifier was wrong, the model has no way of recovering from the error.", |
|
"cite_spans": [ |
|
{ |
|
"start": 407, |
|
"end": 433, |
|
"text": "(Bennett and Nguyen, 2009)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Top-down Local Search", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "To alleviate this problem, various modifications have been proposed, which we collectively call post-training adjustment. Sasaki and Weissenbacher (2012) combined broader candidate generation with post-hoc pruning. They first generated a larger number of candidates by setting a negative threshold (e.g., \u22120.2) instead of 0 in Line 7. Then they filtered out unlikely labels by setting another threshold on the sum of (sigmoid-transformed) local scores of each candidate's path. S-cut (Montejo-R\u00e1ez and Ure\u00f1a-L\u00f3pez, 2006; Wang et al., 2011) adjusts the threshold for each classifier. R-cut selects top-r candidates either globally (Liu et al., 2005; Montejo-R\u00e1ez and Ure\u00f1a-L\u00f3pez, 2006) or at each parent node (Wang et al., 2011) . Wang et al. (2011) developed a meta-classifier which classified a root-to-leaf path using sigmoid-transformed local scores and some additional features. All these methods assume that the models themselves are inherently imperfect and must be supplemented by additional parameters which are tuned manually or by using development data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 484, |
|
"end": 520, |
|
"text": "(Montejo-R\u00e1ez and Ure\u00f1a-L\u00f3pez, 2006;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 521, |
|
"end": 539, |
|
"text": "Wang et al., 2011)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 630, |
|
"end": 648, |
|
"text": "(Liu et al., 2005;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 649, |
|
"end": 684, |
|
"text": "Montejo-R\u00e1ez and Ure\u00f1a-L\u00f3pez, 2006)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 708, |
|
"end": 727, |
|
"text": "(Wang et al., 2011)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 730, |
|
"end": 748, |
|
"text": "Wang et al. (2011)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Top-down Local Search", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We see hierarchical multi-label text classification as a structured prediction problem. We propose a global model that jointly predicts m, or tree(m).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Global Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "score(x, m) = w \u2022 \u03a6(x, tree(m)) w can be constructed simply by combining local edge classifiers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Global Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "w = w ROOT\u2192A \u2295 w ROOT\u2192B , \u2022 \u2022 \u2022 , \u2295w B\u2192BB", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Global Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Its corresponding feature function \u03a6(x, tree(m)) returns copies of \u03d5(x), each of which corresponds to an edge of the label hierarchy. Thus score(x, m) can be reformulated as follows.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Global Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "score(x, m) = \u2211 p\u2192c\u2208tree(m) w p\u2192c \u2022 \u03d5(x)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Global Model", |
|
"sec_num": "4.1" |
|
}, |
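For concreteness, the global score is just a sum of edge-classifier scores over tree(m); a small sketch in our own notation, reusing a path function as in Section 2:

```python
def global_score(phi_x, m, weights, path_fn):
    """score(x, m): sum over edges p -> c in tree(m) of w_{p->c} . phi(x).
    path_fn(l) returns the set of edges connecting ROOT to label l."""
    edges = set().union(*(path_fn(l) for l in m)) if m else set()
    return sum(weights[e].dot(phi_x) for e in edges)
```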
|
{ |
|
"text": "Now we want to find m that maximizes the global score, argmax m score(x, m).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Global Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "With the global model, we can confirm that local search is a major source of errors. In preliminary experiments, we trained local edge classifiers on ALL data and combined the resultant classifiers to create a global model. For 33% of documents in the same dataset, local search found sets of labels whose global scores were lower than the corresponding correct sets of labels. if c is a leaf then 4: ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Global Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "u \u2190 u \u222a {({c}, wp\u2192c \u2022 \u03d5(x))} 5: else 6: (m \u2032 , s \u2032 ) \u2190 MAXTREE(x, c) 7: u \u2190 u \u222a {(m \u2032 , s \u2032 + wp\u2192c \u2022 \u03d5(x))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Global Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We show that an exact solution for the global model can be found by dynamic programming. 6 The pseudo-code is given in Algorithm 3. MAXTREE(x, p) recursively finds a subtree that maximizes the score rooted by p, and thus we invoke MAXTREE(x, ROOT). For p, each child c is associated with (1) a set of labels that maximizes the score of the subtree rooted by c and (2) its score (Lines 3-8). The score of c is the sum of c's tree score and the score of the edge p \u2192 c. A leaf's tree score is zero.", |
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 90, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamic Programming", |
|
"sec_num": "4.2" |
|
}, |
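A sketch of the recursion in the spirit of Algorithm 3, under the same assumptions as the local-search sketch above (`children`, `weights`); it is illustrative, not the authors' implementation:

```python
def max_tree(phi_x, p, children, weights):
    """Exact decoding in the spirit of Algorithm 3 (MAXTREE).
    Returns (label set, score) of the best subtree hanging below node p."""
    u = []                                                # (labels, score) per child
    for c in children[p]:                                 # lines 2-8
        edge_score = weights[(p, c)].dot(phi_x)
        if c not in children:                             # c is a leaf
            u.append(({c}, edge_score))
        else:
            m_sub, s_sub = max_tree(phi_x, c, children, weights)
            u.append((m_sub, s_sub + edge_score))
    kept = [(m, s) for m, s in u if s > 0]                # line 10
    if not kept:                                          # lines 11-13: keep best child
        kept = [max(u, key=lambda ms: ms[1])]
    labels, score = set(), 0.0
    for m, s in kept:                                     # lines 14-15
        labels |= m
        score += s
    return labels, score
```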
|
{ |
|
"text": "To maximize p's tree score, we select all children that add positive scores to the parent (Line 10). If no children add positive scores, we select one child that gives the highest score (Lines 11-13). Again, the flat model can be seen as a special case of this algorithm. The selected children correspond to p's label set and score (Lines 14-15).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamic Programming", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "A possible extension to this algorithm is to output N -best label sets. Since our algorithm is much easier than bottom-up parsing (McDonald et al., 2005) , it would not be so difficult (Collins and Koo, 2005) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 153, |
|
"text": "(McDonald et al., 2005)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 185, |
|
"end": 208, |
|
"text": "(Collins and Koo, 2005)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamic Programming", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Dynamic programming resolves the search problem. We no longer require post-training adjustment. It allows us to concentrate on improving the model itself. 6 Bennett and Nguyen (2009) proposed a similar method, but neither global model nor global training was considered. In their method, the scores of lower-level classifiers were incorporated as meta-features of a higher-level classifier. All these classifiers were trained locally and required burdensome cross-validation techniques.", |
|
"cite_spans": [ |
|
{ |
|
"start": 155, |
|
"end": 156, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamic Programming", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Algorithm 4 Modification to incorporate branching features. Replace Lines 10-15 of Algorithm 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamic Programming", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "10: r \u2190 u sorted by s in descending order 11:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamic Programming", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "r \u2032 \u2190 {}, s \u2032 \u2190 0, m \u2032 \u2190 {} 12: for k = 1..size of r do 13: (m, s) \u2190 r[k] 14: s \u2032 \u2190 s \u2032 + s, m \u2032 \u2190 m \u2032 \u222a m 15: r \u2032 \u2190 r \u2032 \u222a {(m \u2032 , s \u2032 + wBF \u2022 \u03d5BF(p, k))} 16", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamic Programming", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": ": end for 17: (m, s) \u2190 item in r \u2032 that has the highest s", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamic Programming", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Now we are ready to exploit inter-label dependencies. We introduce branching features, a simple but powerful extension to the global model. They influence how many children a node selects. The corresponding function is \u03d5 BF (p, k) , where p is a non-leaf node and k is the number of children to be selected for p. To avoid sparsity, we choose one of R + 1 features (1, \u2022 \u2022 \u2022 , R or >R) for some pre-defined R. To be precise, we fire two features per non-leaf node: one is node-specific and the other is shared among non-leaf nodes. As a result, we append at most (I + 1)(R + 1) features to the global weight vector, where I is the number of non-leaf nodes.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 219, |
|
"end": 230, |
|
"text": "\u03d5 BF (p, k)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Inter-label Dependencies", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "All we have to do to incorporate branching features is to replace Lines 10-15 of Algorithm 3 with Algorithm 4. For given k, we first need to select k children that maximize the sum of the scores. This can by done by sorting children by score and select the first k children. We then add a score of branching features w BF \u2022 \u03d5 BF (p, k) (Line 15). Finally we chose a candidate with the highest score (Line 17).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inter-label Dependencies", |
|
"sec_num": "4.3" |
|
}, |
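A hedged sketch of the modified child selection (Algorithm 4), assuming the branching features are realized as a lookup keyed by (node, bucket) for the node-specific feature plus a shared key ("*", bucket); this encoding is our assumption, not the paper's:

```python
def select_children(u, p, w_bf, R=3):
    """Child selection with branching features, in the spirit of Algorithm 4.
    u: list of (labels, score) per child of p, as computed by MAXTREE;
    w_bf: dict from branching-feature keys to weights (illustrative encoding)."""
    def bf_score(k):
        bucket = k if k <= R else R + 1                 # one of 1..R or ">R"
        return w_bf.get((p, bucket), 0.0) + w_bf.get(("*", bucket), 0.0)
    r = sorted(u, key=lambda ms: ms[1], reverse=True)   # line 10: sort by score
    best, labels, total = None, set(), 0.0
    for k, (m, s) in enumerate(r, start=1):             # lines 12-16
        labels, total = labels | m, total + s           # best k children so far
        cand = (labels, total + bf_score(k))
        if best is None or cand[1] > best[1]:
            best = cand
    return best                                         # line 17: highest-scoring k
```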
|
{ |
|
"text": "Up to this point, the global model is constructed by combining locally trained classifiers. Of course, we can directly train the global model. In fact we cannot incorporate branching features without global training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Global Training", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Algorithm 5 shows a Passive-Aggressive algorithm for the structured output (Crammer et al., 2006) . We can find an exact solution under the current weight vector by dynamic programming (Line 5). 7 The cost \u03c1 reflects the degree to which the model's prediction was wrong. It is based on the end for 13: end for example-based F measure, which will be reviewed in Section 5.3.", |
|
"cite_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 97, |
|
"text": "(Crammer et al., 2006)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 195, |
|
"end": 196, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Global Training", |
|
"sec_num": "4.4" |
|
}, |
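A sketch of one update of Algorithm 5 under our reading of the paper: Phi(x, tree(m)) is a per-edge copy of phi(x), and the cost rho is simply added to the margin violation (how rho enters the loss is our assumption); this is not the authors' code:

```python
def pa_global_update(w, phi_x, edges_gold, edges_pred, rho, C=1.0):
    """One structured PA-I update (sketch of Algorithm 5, lines 5-10).
    w: dict edge -> numpy weight vector (one entry per edge of the hierarchy);
    edges_gold, edges_pred: tree(m) and tree(m_hat) as sets of edges."""
    def score(edges):
        return sum(w[e].dot(phi_x) for e in edges)
    loss = score(edges_pred) - score(edges_gold) + rho   # margin violation + cost
    if loss <= 0.0:
        return w
    # ||Phi(x, tree(m)) - Phi(x, tree(m_hat))||^2: phi(x) appears once for every
    # edge that lies in exactly one of the two trees.
    denom = len(edges_gold ^ edges_pred) * phi_x.dot(phi_x)
    tau = min(C, loss / denom) if denom > 0 else 0.0
    for e in edges_gold - edges_pred:
        w[e] = w[e] + tau * phi_x        # push the correct tree up
    for e in edges_pred - edges_gold:
        w[e] = w[e] - tau * phi_x        # push the predicted tree down
    return w
```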
|
{ |
|
"text": "Note that what are called \"global\" in some previous studies are in fact path-based methods (Qiu et al., 2009; Qiu et al., 2011; Wang et al., 2011; Sasaki and Weissenbacher, 2012) . In contrast, we present tree-wide optimization.", |
|
"cite_spans": [ |
|
{ |
|
"start": 91, |
|
"end": 109, |
|
"text": "(Qiu et al., 2009;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 110, |
|
"end": 127, |
|
"text": "Qiu et al., 2011;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 128, |
|
"end": 146, |
|
"text": "Wang et al., 2011;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 147, |
|
"end": 178, |
|
"text": "Sasaki and Weissenbacher, 2012)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Global Training", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "One problem with global training is speed. We can no longer train local classifiers in parallel because global training makes the model monolithic. Even worse, label set prediction is orders of magnitude slower than a binary classification. For these reasons, global training is extremely slow.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parallelization of Global Training", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "We resort to iterative parameter mixing (Mc-Donald et al., 2010) . The basic idea is to split training data into small \"shards\" instead of subdividing the model. Algorithm 6 gives a pseudocode, where S is the number of shards. We perform training on each shard in parallel. At the end of each iteration, we average the models and use the resultant model as the initial value for the next iteration.", |
|
"cite_spans": [ |
|
{ |
|
"start": 40, |
|
"end": 64, |
|
"text": "(Mc-Donald et al., 2010)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parallelization of Global Training", |
|
"sec_num": "4.5" |
|
}, |
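A compact sketch of Algorithm 6 (iterative parameter mixing), assuming the whole weight vector fits in one flat numpy array and that `train_one_epoch` wraps one pass of the structured learner (Algorithm 5) on a shard; names are illustrative:

```python
from concurrent.futures import ProcessPoolExecutor

def iterative_parameter_mixing(shards, train_one_epoch, init_w, N=10):
    """Sketch of Algorithm 6: train on shards in parallel, then average.
    train_one_epoch(w, shard) runs one pass of the learner starting from w."""
    w = init_w
    for _ in range(N):
        with ProcessPoolExecutor(max_workers=len(shards)) as pool:
            futures = [pool.submit(train_one_epoch, w, shard) for shard in shards]
            results = [f.result() for f in futures]      # line 7: join
        w = sum(results) / len(shards)                    # line 8: average the models
    return w
```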
|
{ |
|
"text": "Iterative parameter mixing was originally proposed for Perceptron training. However, as Mc-Donald et al. (2010) noted, it is possible to provide theoretical guarantees for distributed online Passive-Aggressive learning.", |
|
"cite_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 111, |
|
"text": "Mc-Donald et al. (2010)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parallelization of Global Training", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "We used JSTPlus, a bibliographic database on science, technology and medicine built by Japan Science and Technology Agency (JST ws \u2190 asynchronously call Algorithm 5 with some modifications: T is replaced with Ss, w is initialized with w instead of 0, and N is set as 1. 6: end for 7: join 8:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "w \u2190 1 S \u2211 S s=1 ws 9: end for ment consisted of a title, an abstract, a list of authors, a journal name, a set of categories and many other fields. For experiments, we selected a set of documents that (1) were dated 2010 and (2) contained both Japanese title and abstract. As a result, we obtained 455,311 documents, which were split into 409,892 documents for training and 45,419 documents for evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The number of labels was 3,209, which amounts to 4,030 edges. All the leave nodes are located at the fifth level (the root not counted). Some edges skip intermediate levels (e.g., children of a secondlevel node are located at the fourth level). On average 1.85 categories were assigned to a document, with a variance of 0.85. The maximum number of categories per document was 9.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "For the feature representation of a document \u03d5(x), we employed two types of features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In addition to the flat model (FLAT), the tree model with various configurations was compared. We performed local training of edge classifiers on ALL data and SIB data as explained in Section 3.2. We applied top-down local search (LS) and dynamic programming (DP) for decoding. We also performed global training (GT) with and without branching features (BF).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We performed 10 iterations for training local classifiers. For iterative parameter mixing described in Section 4.5, we evenly split the training data into 10 shards and ran 10 iterations. For branching features introduced in Section 4.3, we set R = 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Various evaluation measures have been proposed to handle multiple labels. The first group of evaluation measures we adopted is document-oriented measures often referred to as example-based measures (Godbole and Sarawagi, 2004; Tsoumakas et al., 2010) . The example-based precision (EBP), recall (EBR) and F measure (EBF) are defined as follows.", |
|
"cite_spans": [ |
|
{ |
|
"start": 198, |
|
"end": 226, |
|
"text": "(Godbole and Sarawagi, 2004;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 227, |
|
"end": 250, |
|
"text": "Tsoumakas et al., 2010)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Measures", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "EBP = 1 T T \u2211 i=1 |m i \u2229m i | |m i | EBR = 1 T T \u2211 i=1 |m i \u2229m i | |m i | EBF = 1 T T \u2211 i=1 2|m i \u2229m i | |m i | + |m i |", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Measures", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "where T is the number of documents in the test data, m i is a set of correct labels of the i-th document andm i is a set of labels predicted by the model. Another group of measures are called labelbased (LB) and are based on the precision, recall and F measure of each label (Tsoumakas et al., 2010) . Multiple label scores are combined by performing macro-averaging (Ma) or microaveraging (Mi), resulting in six measures.", |
|
"cite_spans": [ |
|
{ |
|
"start": 275, |
|
"end": 299, |
|
"text": "(Tsoumakas et al., 2010)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Measures", |
|
"sec_num": "5.3" |
|
}, |
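These example-based measures are straightforward to compute; a small sketch over parallel lists of gold and predicted label sets (empty sets simply contribute zero):

```python
def example_based_prf(gold, pred):
    """EBP, EBR and EBF over parallel lists of gold and predicted label sets."""
    T = len(gold)
    ebp = sum(len(m & mh) / len(mh) for m, mh in zip(gold, pred) if mh) / T
    ebr = sum(len(m & mh) / len(m) for m, mh in zip(gold, pred) if m) / T
    ebf = sum(2 * len(m & mh) / (len(m) + len(mh))
              for m, mh in zip(gold, pred) if m or mh) / T
    return ebp, ebr, ebf

print(example_based_prf([{"AA", "BB"}], [{"AA"}]))  # (1.0, 0.5, 0.666...)
```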
|
{ |
|
"text": "Lastly we used hierarchical evaluation measures to give some scores to \"partially correct\" labels (Kiritchenko, 2005 ). If we assume a tree instead of a more general directed acyclic graph, we can formulate the (micro-average) hierarchical precision (hP) and hierarchical recall (hR) as follows.", |
|
"cite_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 116, |
|
"text": "(Kiritchenko, 2005", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Measures", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "hP = \u2211 T i=1 |tree(m i ) \u2229 tree(m i )| \u2211 T i=1 |tree(m i )| hR = \u2211 T i=1 |tree(m i ) \u2229 tree(m i )| \u2211 T i=1 |tree(m i )|", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Measures", |
|
"sec_num": "5.3" |
|
}, |
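Likewise, the micro-averaged hierarchical measures can be sketched over the edge sets tree(m_i) and tree(m\u0302_i); hF is then the harmonic mean of hP and hR:

```python
def hierarchical_prf(gold_trees, pred_trees):
    """Micro-averaged hP, hR and hF; each element is tree(m), a set of edges."""
    overlap = sum(len(g & p) for g, p in zip(gold_trees, pred_trees))
    hp = overlap / sum(len(p) for p in pred_trees)
    hr = overlap / sum(len(g) for g in gold_trees)
    hf = 2 * hp * hr / (hp + hr) if hp + hr > 0 else 0.0
    return hp, hr, hf
```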
|
{ |
|
"text": "The hierarchical F measure (hF) is the harmonic mean of hP and hR. Table 1 shows the performance comparison of various models. DP-GT-BF performed best in 5 measures. Compared with FLAT, DP-GT-BF drastically improved LBMiP and hP. Branching features consistently improved F measures. The tree model with local search was generally outperformed by the flat model. Compared with FLAT, DP-ALL and DP-GT, DP-GT-BF yielded statistically significant improvements with p < 0.01. DP-ALL outperformed LS-ALL for all but one measures. DP-SIB performed extremely poorly while DP-ALL was competitive with DP-GT-BF. This is in sharp contrast to the pair of LS-ALL and LS-SIB, which performed similarly. Dynamic programming forced DP-SIB's local classifiers to classify what were completely new to them because they had been trained only on small portions of data. The result was highly unpredictable.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 74, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation Measures", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "As expected, dynamic programming was much slower than local search. In fact DP-GT-BF was more than 60 times slower than local search. Somewhat surprisingly, it took only 18% more time than FLAT. This may be explained by the fact that DP-GT-BF was 16% smaller in size than FLAT.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "Although DP-ALL was competitive with DP-GT and DP-GT-BF, it is notable that global training yielded much smaller models. Branching features brought further model size reduction along with almost consistent performance improvement. This result seems to support our hypothesis concerning the decision-making process of the human annotators. They do not select each label independently but consider the relative importance among competing labels. Table 2 : Performance on the training data.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 444, |
|
"end": 451, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "than DP-GT-BF although they were outperformed on the test data. It seems safe to conclude that local training caused overfitting. We further investigated the models by decomposing them into edges. Figure 2 compares three models. The first three figures (a-c) report the number of non-trivial elements in each weight vector. Edges are grouped by the level of child nodes. Although DP-GT-BR was much smaller in total size than DP-ALL, the per-edge size distributions looked alike. The higher the level was, the larger number of non-trivial features each model required. Compared with DP-SIB, DP-GT-BR had compact local classifiers for the highest-level edges but the rest was generally larger. Intuitively, knowing its siblings is not enough for each local classifier, but it does not need to know all possible rivals.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 197, |
|
"end": 205, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5.5" |
|
}, |
|
{ |
|
"text": "The last three figures (d-f) report the averaged absolute scores of each edge that were calculated from the model output for the test data. By doing this, we would like to measure how edges of various levels affect the model output. Higher-level edges tended to have larger impact. However, we can see that in DP-GT-BR, their impact was relatively small. In other words, lower-level edges played more important roles in DP-GT-BR than in other models. Figure 3 shows a heat map representation of the weight vector for DP-GT-BR's branching features. The value of each item is the weight value averaged over parent nodes. All averaged weight values were negative. The penalty monotonically increased with the number of children. It is not easy to compare different levels of nodes because weight values depended on other parts of the weight vector. However, the fact that lower-level nodes marked sharper contrasts between small and large number of children appears to support our Table 1. hypothesis about the competitive nature of label candidates positioned proximally in the label hierarchy.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 451, |
|
"end": 459, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 978, |
|
"end": 986, |
|
"text": "Table 1.", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5.5" |
|
}, |
|
{ |
|
"text": "In this paper, we treated hierarchical multi-label text classification as a structured prediction problem. Under this framework, we proposed (1) dynamic programming that finds an exact solution, (2) global training and (3) branching features that capture inter-label dependencies. Branching features improve performance while reducing the model size. This result suggests that the selection of multiple labels by human annotators greatly depends on the relative importance among competing labels.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Exploring features that capture other types of inter-label dependencies is a good research direction. For example, \"Others\" labels probably behave atypically in relation to their siblings. While we focus on the setting where only the leaf nodes represent valid labels, internal nodes are sometimes used as valid labels. Such internal nodes often block the selection of their descendants. Also, we would like to work on directed acyclic graphs and to improve scalability in the future.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "http://www.wipo.int/classifications/ en/ 2 http://www.nlm.nih.gov/mesh/ 3 http://www.dmoz.org/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Some studies work on directed acyclic graphs (DAGs), in which each node can have more than one parent(Labrou and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For other methods,Punera and Ghosh (2008) postprocess local classifier outputs by isotonic tree regression.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "If we want for some reason to stick to local search, we need to address the problem of \"non-violation.\" With inexact search, the model predictionm may have a lower score than correct m, making the update invalid. Several methods have been proposed to solve this problem(Collins and Roark, 2004;Huang et al., 2012).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": ". Journal name (binary). One feature was fired per document.2. Content words in the title and abstract(frequency-valued).Frequencies of the words in the title were multiplied by two.To extract content words, we first applied the morphological analyzer JUMAN 9 to each sentence to segment it into a word sequence. From each word sequence, we selected content words using the dependency parser KNP, 10 which tagged content words at a pre-processing step. Each document contained 380 characters on average, which corresponded to 120 content words according to JU-MAN and KNP.9 http://nlp.ist.i.kyoto-u.ac.jp/EN/ index.php?JUMAN 10 http://nlp.ist.i.kyoto-u.ac.jp/EN/ index.php?KNP", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank the Department of Databases for Information and Knowledge Infrastructure, Japan Science and Technology Agency for providing JST-Plus and helping us understand the database. This work was partly supported by JST CREST.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Refined experts: improving classification in large taxonomies", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Paul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nam", |
|
"middle": [], |
|
"last": "Bennett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 32nd international ACM SI-GIR conference on Research and development in information retrieval, SIGIR '09", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "11--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paul N. Bennett and Nam Nguyen. 2009. Refined ex- perts: improving classification in large taxonomies. In Proceedings of the 32nd international ACM SI- GIR conference on Research and development in in- formation retrieval, SIGIR '09, pages 11-18.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Ethnobiological classification: principles of categorization of plants and animals in traditional societies", |
|
"authors": [ |
|
{ |
|
"first": "Brent", |
|
"middle": [], |
|
"last": "Berlin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brent Berlin. 1992. Ethnobiological classification: principles of categorization of plants and animals in traditional societies. Princeton University Press.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Discriminative reranking for natural language parsing", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Terry", |
|
"middle": [], |
|
"last": "Koo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Computational Linguistics", |
|
"volume": "31", |
|
"issue": "1", |
|
"pages": "25--70", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Collins and Terry Koo. 2005. Discriminative reranking for natural language parsing. Computa- tional Linguistics, 31(1):25-70.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Incremental parsing with the Perceptron algorithm", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Roark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04), Main Volume", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "111--118", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Collins and Brian Roark. 2004. Incremen- tal parsing with the Perceptron algorithm. In Pro- ceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04), Main Volume, pages 111-118.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Shai Shalev-Shwartz, and Yoram Singer", |
|
"authors": [ |
|
{ |
|
"first": "Koby", |
|
"middle": [], |
|
"last": "Crammer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ofer", |
|
"middle": [], |
|
"last": "Dekel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joseph", |
|
"middle": [], |
|
"last": "Keshet", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "551--585", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. 2006. Online passive-aggressive algorithms. Journal of Machine Learning Research, 7:551-585.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Discriminative methods for multi-labeled classification", |
|
"authors": [ |
|
{ |
|
"first": "Shantanu", |
|
"middle": [], |
|
"last": "Godbole", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sunita", |
|
"middle": [], |
|
"last": "Sarawagi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Advances in Knowledge Discovery and Data Mining", |
|
"volume": "3056", |
|
"issue": "", |
|
"pages": "22--30", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shantanu Godbole and Sunita Sarawagi. 2004. Dis- criminative methods for multi-labeled classifica- tion. In Honghua Dai, Ramakrishnan Srikant, and Chengqi Zhang, editors, Advances in Knowl- edge Discovery and Data Mining, volume 3056 of Lecture Notes in Computer Science, pages 22-30.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Structured Perceptron with inexact search", |
|
"authors": [ |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Suphan", |
|
"middle": [], |
|
"last": "Fayong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "142--151", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liang Huang, Suphan Fayong, and Yang Guo. 2012. Structured Perceptron with inexact search. In Pro- ceedings of the 2012 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 142-151.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Hierarchical Text Categorization and Its Application to Bioinformatics", |
|
"authors": [ |
|
{ |
|
"first": "Svetlana", |
|
"middle": [], |
|
"last": "Kiritchenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Svetlana Kiritchenko. 2005. Hierarchical Text Cat- egorization and Its Application to Bioinformatics. Ph.D. thesis, University of Ottawa.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Yahoo! as an ontology: using Yahoo! categories to describe documents", |
|
"authors": [ |
|
{ |
|
"first": "Yannis", |
|
"middle": [], |
|
"last": "Labrou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Finin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the eighth international conference on Information and knowledge management, CIKM '99", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "180--187", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yannis Labrou and Tim Finin. 1999. Yahoo! as an ontology: using Yahoo! categories to describe doc- uments. In Proceedings of the eighth international conference on Information and knowledge manage- ment, CIKM '99, pages 180-187.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Support vector machines classification with a very largescale taxonomy", |
|
"authors": [ |
|
{ |
|
"first": "Tie-Yan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yiming", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Wan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hua-Jun", |
|
"middle": [], |
|
"last": "Zeng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zheng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Ying", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "SIGKDD Explorations Newsletter", |
|
"volume": "7", |
|
"issue": "1", |
|
"pages": "36--43", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tie-Yan Liu, Yiming Yang, Hao Wan, Hua-Jun Zeng, Zheng Chen, and Wei-Ying Ma. 2005. Support vector machines classification with a very large- scale taxonomy. SIGKDD Explorations Newsletter, 7(1):36-43, June.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "LSHTC3. 2012. ECML/PKDD-2012 Discovery Challenge Workshop on Large-Scale Hierarchical Text Classification", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "LSHTC3. 2012. ECML/PKDD-2012 Discovery Chal- lenge Workshop on Large-Scale Hierarchical Text Classification.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Online large-margin training of dependency parsers", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Koby", |
|
"middle": [], |
|
"last": "Crammer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "91--98", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of de- pendency parsers. In Proceedings of the 43rd An- nual Meeting of the Association for Computational Linguistics (ACL'05), pages 91-98.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Distributed training strategies for the structured Perceptron", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keith", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gideon", |
|
"middle": [], |
|
"last": "Mann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "456--464", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan McDonald, Keith Hall, and Gideon Mann. 2010. Distributed training strategies for the structured Per- ceptron. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Lin- guistics, pages 456-464.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Selection strategies for multi-label text categorization", |
|
"authors": [ |
|
{ |
|
"first": "Arturo", |
|
"middle": [], |
|
"last": "Montejo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "-R\u00e1ez", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luis Alfonso Ure\u00f1a-L\u00f3pez", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Advances in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "585--592", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arturo Montejo-R\u00e1ez and Luis Alfonso Ure\u00f1a-L\u00f3pez. 2006. Selection strategies for multi-label text cate- gorization. In Advances in Natural Language Pro- cessing, pages 585-592. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Enhanced hierarchical classification via isotonic smoothing", |
|
"authors": [ |
|
{ |
|
"first": "Kunal", |
|
"middle": [], |
|
"last": "Punera", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joydeep", |
|
"middle": [], |
|
"last": "Ghosh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 17th international conference on World Wide Web, WWW '08", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "151--160", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kunal Punera and Joydeep Ghosh. 2008. Enhanced hierarchical classification via isotonic smoothing. In Proceedings of the 17th international conference on World Wide Web, WWW '08, pages 151-160.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Hierarchical multi-label text categorization with global margin maximization", |
|
"authors": [ |
|
{ |
|
"first": "Xipeng", |
|
"middle": [], |
|
"last": "Qiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenjun", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuanjing", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proc. of the ACL-IJCNLP Short", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "165--168", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xipeng Qiu, Wenjun Gao, and Xuanjing Huang. 2009. Hierarchical multi-label text categorization with global margin maximization. In Proc. of the ACL- IJCNLP Short, pages 165-168.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Hierarchical text classification with latent concepts", |
|
"authors": [ |
|
{ |
|
"first": "Xipeng", |
|
"middle": [], |
|
"last": "Qiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuanjing", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhao", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinlong", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proc. of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "598--602", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xipeng Qiu, Xuanjing Huang, Zhao Liu, and Jinlong Zhou. 2011. Hierarchical text classification with latent concepts. In Proc. of ACL, pages 598-602.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "TTI'S system for the LSHTC3 challenge", |
|
"authors": [], |
|
"year": null, |
|
"venue": "ECML/PKDD-2012 Discovery Challenge Workshop on Large-Scale Hierarchical Text Classification", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "TTI'S system for the LSHTC3 challenge. In ECML/PKDD-2012 Discovery Challenge Workshop on Large-Scale Hierarchical Text Classification.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Mining multi-label data", |
|
"authors": [], |
|
"year": 2010, |
|
"venue": "Grigorios Tsoumakas, Ioannis Katakis, and Ioannis Vlahavas", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "667--685", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Grigorios Tsoumakas, Ioannis Katakis, and Ioannis Vlahavas. 2010. Mining multi-label data. In Oded Maimon and Lior Rokach, editors, Data Mining and Knowledge Discovery Handbook, pages 667-685. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Enhance top-down method with meta-classification for very large-scale hierarchical classification", |
|
"authors": [ |
|
{ |
|
"first": "Xiao-Lin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hai", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bao-Liang", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of 5th International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1089--1097", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiao-Lin Wang, Hai Zhao, and Bao-Liang Lu. 2011. Enhance top-down method with meta-classification for very large-scale hierarchical classification. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 1089-1097.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "MAXTREE(x, p) Input: document x, tree node p Output: label set m, score s 1: u \u2190 {} 2: for all c in the children of p do 3:" |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Passive-Aggressive algorithm for global training (PA-I, prediction-based updates).Input: training data T = {(xi, mi)tree(m))\u2212\u03a6(x,tree(m))\u2225 2 } 10:w \u2190 w + \u03c4 (\u03a6(x, tree(m)) \u2212 \u03a6(x, tree(m" |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Heat map of the weight vector for branching features. R is the root level." |
|
}, |
|
"FIGREF4": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "nodes (f) DP-GT-BR." |
|
}, |
|
"FIGREF5": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Comparison of model sizes and scores per edge. The definition of size is the same as that in" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"2\">Algorithm 6 Input: training data T = {(xi, mi)} T i=1</td></tr><tr><td colspan=\"2\">Output: weight vector w</td></tr><tr><td colspan=\"2\">1: split T into S1, \u2022 \u2022 \u2022 SS</td></tr><tr><td colspan=\"2\">2: w \u2190 0</td></tr><tr><td colspan=\"2\">3: for n = 1..N do</td></tr><tr><td>4:</td><td>for s = 1..S do</td></tr><tr><td>5:</td><td/></tr><tr><td>). 8 Each docu-</td><td/></tr><tr><td>8 http://www.jst.go.jp/EN/menu3/01.html</td><td/></tr></table>", |
|
"text": "Iterative parameter mixing for global training." |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td>shows the performance of several models</td></tr><tr><td>on the training data. It is interesting that FLAT and</td></tr><tr><td>DP-ALL scored much higher on the training data</td></tr></table>", |
|
"text": "Performance comparison of various models. Time is the one required to classify test data. Loading time was not counted. Size is defined as the number of elements in the weight vector whose absolute values are greater than 10 \u22127 ." |
|
} |
|
} |
|
} |
|
} |