{
"paper_id": "D19-1042",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:06:55.705664Z"
},
"title": "Hierarchical Text Classification with Reinforced Label Assignment",
"authors": [
{
"first": "Yuning",
"middle": [],
"last": "Mao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": {
"region": "IL",
"country": "USA"
}
},
"email": "yuningm2@illinois.edu"
},
{
"first": "Jingjing",
"middle": [],
"last": "Tian",
"suffix": "",
"affiliation": {},
"email": "tianjj97@pku.edu.cn"
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": {
"region": "IL",
"country": "USA"
}
},
"email": "hanj@illinois.edu"
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Peking University",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "xiangren@usc.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "While existing hierarchical text classification (HTC) methods attempt to capture label hierarchies for model training, they either make local decisions regarding each label or completely ignore the hierarchy information during inference. To solve the mismatch between training and inference as well as modeling label dependencies in a more principled way, we formulate HTC as a Markov decision process and propose to learn a Label Assignment Policy via deep reinforcement learning to determine where to place an object and when to stop the assignment process. The proposed method, HiLAP, explores the hierarchy during both training and inference time in a consistent manner and makes interdependent decisions. As a general framework, HiLAP can incorporate different neural encoders as base models for end-to-end training. Experiments on five public datasets and four base models show that HiLAP yields an average improvement of 33.4% in Macro-F1 over flat classifiers and outperforms state-of-the-art HTC methods by a large margin. 1",
"pdf_parse": {
"paper_id": "D19-1042",
"_pdf_hash": "",
"abstract": [
{
"text": "While existing hierarchical text classification (HTC) methods attempt to capture label hierarchies for model training, they either make local decisions regarding each label or completely ignore the hierarchy information during inference. To solve the mismatch between training and inference as well as modeling label dependencies in a more principled way, we formulate HTC as a Markov decision process and propose to learn a Label Assignment Policy via deep reinforcement learning to determine where to place an object and when to stop the assignment process. The proposed method, HiLAP, explores the hierarchy during both training and inference time in a consistent manner and makes interdependent decisions. As a general framework, HiLAP can incorporate different neural encoders as base models for end-to-end training. Experiments on five public datasets and four base models show that HiLAP yields an average improvement of 33.4% in Macro-F1 over flat classifiers and outperforms state-of-the-art HTC methods by a large margin. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years there has been a surge of interest in leveraging hierarchies (taxonomies) to organize objects (e.g., documents), leading to the development of hierarchical text classification (HTC)-a task that aims to predict for an object multiple appropriate labels in a given label hierarchy, which together constitute a sub-tree. HTC methods have found a wide range of applications such as question answering (Qu et al., 2012) , online advertising (Agrawal et al., 2013) , and scientific literature organization (Peng et al., 2016) . In contrast to \"flat\" classification, the key challenges of HTC Figure 1 : We aim at consistent, multi-path, and nonmandatory leaf node prediction. For a Caribbean restaurant with a beer bar, inconsistent prediction may place it to node \"Beer Bars\" but not \"Bars\", which contradicts with each other; Single-path prediction may only recognize that it is a beer bar; Mandatory leaf node prediction would have to assign a leaf node \"Dominican\" even if the nation of the cuisine is uncertain. lie in modeling the large-scale, imbalanced, and in particular, structured label space.",
"cite_spans": [
{
"start": 413,
"end": 430,
"text": "(Qu et al., 2012)",
"ref_id": "BIBREF25"
},
{
"start": 452,
"end": 474,
"text": "(Agrawal et al., 2013)",
"ref_id": "BIBREF0"
},
{
"start": 516,
"end": 535,
"text": "(Peng et al., 2016)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 602,
"end": 610,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Based on how the hierarchy is explored, HTC methods can be summarized into flat, local, and global approaches (Silla and Freitas, 2011) . Flat approaches (Hayete and Bienkowska, 2005; Johnson and Zhang, 2014) assume all the labels in the given hierarchy are independent. Some predict labels at the leaf nodes and heuristically add their ancestor labels, which is problematic as the labels of some objects may not be at the leaf nodes (nonmandatory leaf node prediction, see Fig. 1 ) and all the non-leaf nodes are completely neglected. Some simply ignore the hierarchy and perform standard multi-label classification, in which label inconsistencies (one label is predicted positive but its ancestors are not) may occur and post-processing is needed to correct such contradictions. Local approaches (Koller and Sahami, 1997; Cesa-Bianchi et al., 2006) Figure 2 : An illustrative example of the label assignment policy. At t = 0, x i is placed at the root label and the policy would decide if x i should be placed to its two children (red). At t = 1, x i is placed at label \"Restaurants\", which adds its three children as the candidates. At t = 6, the stop action is taken and the label assignment is thus terminated. We then take all the labels where x i has been placed (blue) as x i 's labels. been predicted positive. One critical issue is that the number of local classifiers depends on the size of the label hierarchy, making local approaches infeasible to scale.",
"cite_spans": [
{
"start": 110,
"end": 135,
"text": "(Silla and Freitas, 2011)",
"ref_id": "BIBREF29"
},
{
"start": 154,
"end": 183,
"text": "(Hayete and Bienkowska, 2005;",
"ref_id": "BIBREF10"
},
{
"start": 184,
"end": 208,
"text": "Johnson and Zhang, 2014)",
"ref_id": "BIBREF11"
},
{
"start": 798,
"end": 823,
"text": "(Koller and Sahami, 1997;",
"ref_id": "BIBREF13"
},
{
"start": 824,
"end": 850,
"text": "Cesa-Bianchi et al., 2006)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 474,
"end": 480,
"text": "Fig. 1",
"ref_id": null
},
{
"start": 851,
"end": 859,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Global approaches use one single classifier and model the label hierarchy more explicitly. Traditional global approaches (Wang et al., 2001; Silla Jr and Freitas, 2009) are largely based on specific flat models and often make unrealistic assumptions (Cai and Hofmann, 2004) as in flat approaches. Recent neural approaches (Kim, 2014; Yang et al., 2016) mainly focus on flat classification while their performance in HTC is relatively less studied. Even if the classification is supposed to be hierarchical, prior work (Gopal and Yang, 2013; Johnson and Zhang, 2014; Peng et al., 2018) still make flat and independent predictions or utilize simple constraints without considering the holistic quality of label assignment. One recent framework (Wehrmann et al., 2018) attempts to leverage both local and global information but it uses static features as input and its inference process is still flat.",
"cite_spans": [
{
"start": 121,
"end": 140,
"text": "(Wang et al., 2001;",
"ref_id": "BIBREF34"
},
{
"start": 141,
"end": 168,
"text": "Silla Jr and Freitas, 2009)",
"ref_id": "BIBREF30"
},
{
"start": 250,
"end": 273,
"text": "(Cai and Hofmann, 2004)",
"ref_id": "BIBREF4"
},
{
"start": 322,
"end": 333,
"text": "(Kim, 2014;",
"ref_id": "BIBREF12"
},
{
"start": 334,
"end": 352,
"text": "Yang et al., 2016)",
"ref_id": "BIBREF38"
},
{
"start": 518,
"end": 540,
"text": "(Gopal and Yang, 2013;",
"ref_id": "BIBREF9"
},
{
"start": 541,
"end": 565,
"text": "Johnson and Zhang, 2014;",
"ref_id": "BIBREF11"
},
{
"start": 566,
"end": 584,
"text": "Peng et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 742,
"end": 765,
"text": "(Wehrmann et al., 2018)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we formulate HTC as a Markov decision process to better capture label dependencies and measure the holistic quality of label assignment. We present HiLAP, a global framework that learns a label assignment policy to determine where to place the objects and when to stop the assignment process. HiLAP explores the label hierarchy during both training and inference in a consistent manner, which alleviates the exposure bias often found in prior local and global approaches. By learning when to stop, HiLAP is more flexible than approaches that only support mandatory leaf node prediction or require thresholding. In addition, HiLAP supports multi-path prediction and its predictions of one object on different paths are inter-dependent, which not only guarantees label consistency but matches the nature of HTC.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Furthermore, HiLAP estimates the holistic quality of all the labels assigned to one object via reinforcement learning instead of evaluating each label independently via maximum likelihood as in prior studies. To summarize, HiLAP achieves better effectiveness compared to flat and local approaches as it examines the label hierarchy during both training and inference. HiLAP has more flexibility and generalization capacity than previous global approaches in that it has no constraints on the structure of the hierarchy or the labels of the objects (Cai and Hofmann, 2004) , generalizes to neural representation learning models (Gopal and Yang, 2013) , and makes inter-dependent predictions while ensuring label consistency (Wehrmann et al., 2018; Peng et al., 2018) .",
"cite_spans": [
{
"start": 548,
"end": 571,
"text": "(Cai and Hofmann, 2004)",
"ref_id": "BIBREF4"
},
{
"start": 627,
"end": 649,
"text": "(Gopal and Yang, 2013)",
"ref_id": "BIBREF9"
},
{
"start": 723,
"end": 746,
"text": "(Wehrmann et al., 2018;",
"ref_id": "BIBREF35"
},
{
"start": 747,
"end": 765,
"text": "Peng et al., 2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "HiLAP can be combined with various neural encoding models and trained in an end-to-end fashion. In our experiments, we select four representative encoding models as the base models to evaluate the effectiveness of HiLAP. Experimental results on five public datasets from different domains show that combining the base models with HiLAP yields an average performance improvement of 33.4% in Macro-F1 over corresponding flat classifiers and outperforms state-ofthe-art HTC methods by a large margin. In particular, ablation study shows that HiLAP is especially beneficial to those unpopular labels at the bottom levels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Hierarchical Label Assignment 2.1 Overview Problem Formulation. We define a label hierarchy H = (L, E) as a tree or DAG (directed acyclic graph)-structured hierarchy with a set of nodes (labels) L and a set of edges E indicating the parent-child relation between the labels. Taking a set of objects Figure 3 : The architecture of the proposed framework HiLAP. One CNN model (Kim, 2014) is used as the base model for illustration. The object embedding e d generated by the base model is combined with the embedding of currently assigned label l t and used as the state representation s t , based on which actions are taken by the policy network. The time corresponds to t = 1 in Fig. 2 . learn a label assignment policy P to place each object x i to its labels L i on the label hierarchy H. The label assignment is supposed to be consistent, multi-path, and non-mandatory leaf node prediction (refer to Figs. 1 and 2). We define one base model B as a mapping f that converts raw object x i to a finite dimensional vector, i.e., the object embedding e d \u2208 R D . B can be any neural representation learning model and its output e d is used as the input of P for policy learning. The major challenge, compared to standard classification setup, is that we need to model E, i.e., the relation between labels. Our Framework. Prior studies either have a mismatch between training and inference as different routines are followed in the two phases, or compute losses with respect to each individual label and make flat predictions during inference time. In contrast, we learn a policy that (1) makes consistent, inter-dependent predictions by traversing the label hierarchy and maintaining state representation;",
"cite_spans": [
{
"start": 376,
"end": 387,
"text": "(Kim, 2014)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 301,
"end": 309,
"text": "Figure 3",
"ref_id": null
},
{
"start": 680,
"end": 686,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "X = {x 1 , x 2 , ..., x N } and their labels L = {L 1 , L 2 , ..., L N }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) measures the holistic quality of label assignment via reinforcement learning. Specifically, the policy P puts x i at the root label in the beginning. At each time step, P decides which label x i should be further placed to, among all the children labels of where x i has been placed, until a special stop action is taken. An illustration of how HiLAP labels one object is shown in Fig. 2 and the overall architecture of HiLAP is shown in Fig. 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 385,
"end": 391,
"text": "Fig. 2",
"ref_id": null
},
{
"start": 442,
"end": 448,
"text": "Fig. 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
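The following is an editor-added minimal sketch (not part of the original paper or its released code) of the problem setup just described: the label hierarchy H = (L, E) as a parent-to-children map, gold label sets that form sub-trees, and a stand-in for the base model B that maps a raw object to its embedding e_d. All names and values are illustrative assumptions.

```python
# Editor's illustrative sketch of the HTC setup used by HiLAP (assumed names, not the authors' code).
import torch

# Label hierarchy H = (L, E), stored as a parent -> children map (tree or DAG).
children_of = {
    "root": ["Restaurants", "Bars"],
    "Restaurants": ["Caribbean", "Sushi Bars", "Fast Food"],
    "Caribbean": ["Dominican", "Haitian"],
    "Bars": ["Beer Bars", "Wine Bars"],
}

# Gold labels of one object form a consistent, possibly multi-path sub-tree of H.
gold_labels = {"Restaurants", "Caribbean", "Bars", "Beer Bars"}

D = 128  # object embedding size (illustrative)

def encode(raw_object) -> torch.Tensor:
    """Stand-in for the base model B (e.g., TextCNN, HAN, bow-CNN): returns e_d of size D."""
    return torch.randn(D)  # placeholder; the real encoder is trained end-to-end with the policy
```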
{
"text": "We describe the details of policy learning including its actions, rewards, states, and the policy network in this section. We formulate HTC as a Markov decision process (MDP): at each time step, the agent observes current state, takes an action, and receives a reward. The end goal is to train a policy network to determine where to place the objects and when to stop.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning for Hierarchical Label Assignment",
"sec_num": "2.2"
},
{
"text": "Actions. Specifically, we regard the process of placing an object x i to the right positions on the label hierarchy as making a sequence of actions, where an action a t at time step t is to select one label l t+1 from the action space A t and place x i to that label l t+1 . We denote the children of label l t as C(l t ). At the beginning of each episode, x i is placed at the root label l 0 and the action space A 0 = C(l 0 ), i.e., all the labels at level 1. When x i is placed at another label l 1 , its children C(l 1 ) are then added to the action space A 1 while l 1 itself is removed. In addition, one stop action with embedding e stop \u2208 R C is included in the action space so that the model can automatically learn when to stop placing object x i to new labels. Intuitively, when the confidence of placing x i to another label is lower than the stop action, the label assignment process would be terminated. In short, the action space A t consists of all the unvisited children labels of where the object x i has been placed and the stop action. One distinction of HiLAP is that it takes the inter-dependencies of labels across different paths and levels into consideration while previous approaches make independent predictions on different paths. For example, HiLAP can first place x i to a label at level 3 if the probability of that label is high and then place it to another label at level 1 on another path.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning for Hierarchical Label Assignment",
"sec_num": "2.2"
},
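As a rough illustration of the action-space bookkeeping described above (an editorial sketch under assumed data structures, not the authors' implementation), the episode below starts at the root, keeps the unvisited children of all visited labels as candidates, and terminates on a stop action:

```python
import random

STOP = "<stop>"

def assignment_episode(children_of, choose_action=None, max_steps=20):
    """Place one object on the hierarchy; `children_of` is a parent -> children map."""
    placed = []                                        # labels the object has been placed at
    action_space = list(children_of.get("root", []))   # A_0 = C(root)
    for _ in range(max_steps):
        candidates = action_space + [STOP]
        a = choose_action(placed, candidates) if choose_action else random.choice(candidates)
        if a == STOP:                                  # learned stopping criterion
            break
        placed.append(a)                               # place the object at label a
        action_space.remove(a)                         # a leaves the action space...
        action_space.extend(children_of.get(a, []))    # ...and its unvisited children join it
    return placed
```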
{
"text": "Rewards. The agent receives scalar rewards as feedback for its actions. Different from exist-ing work where each label of one example 2 is treated independently, HiLAP measures the quality of all the labels assigned to each example x i by rewarding the agent with the Example-based F1 (see Sec. 4.1 for details of this metric). Intuitively, the agent would realize how similar the assigned and the ground-truth labels of one example are. Instead of waiting until the end of the label assignment process and comparing the predicted labels with the gold labels, we use reward shaping (Mao et al., 2018) , i.e., giving intermediate rewards at each time step, to accelerate the learning process. Specifically, we set the reward r of x i at time step t to be the difference of Example-based F1 scores between current and the last time step:",
"cite_spans": [
{
"start": 582,
"end": 600,
"text": "(Mao et al., 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning for Hierarchical Label Assignment",
"sec_num": "2.2"
},
{
"text": "r x i t = F1 x i t \u2212 F1 x i t\u22121",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning for Hierarchical Label Assignment",
"sec_num": "2.2"
},
{
"text": ". If current F1 is better than that at the last time step, the reward would be positive, and vice versa. The cumulative reward from current time step to the end of an episode would cancel the intermediate rewards and thus reflect whether the current action improves the holistic label assignment or not. As a result, the learned policy would not focus on the current placement but have a long-term view that takes following actions into account.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning for Hierarchical Label Assignment",
"sec_num": "2.2"
},
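A small sketch of the shaped reward (editor-added; it assumes the plain set-based Example-based F1 of Sec. 4.1): each step is rewarded with the change in per-example F1, so the intermediate rewards telescope to the final F1 of the assignment.

```python
def example_f1(predicted, gold):
    """Example-based F1 between a predicted label set and the gold label set."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(predicted), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

def shaped_rewards(partial_assignments, gold):
    """partial_assignments[t] = labels assigned up to step t; returns r_t = F1_t - F1_{t-1}."""
    rewards, prev = [], 0.0
    for placed in partial_assignments:
        f1 = example_f1(placed, gold)
        rewards.append(f1 - prev)
        prev = f1
    return rewards  # sums to the final Example-based F1 of the episode
```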
{
"text": "States and Policy Network. We parameterize action a t by a policy network \u03c0(a | s; W). For each object, its representation e d is generated by the base model B. For each label, a label embedding l \u2208 R C is randomly initialized and updated during training. The embeddings of the object e d and currently assigned label l t are concatenated and projected to a vector s t \u2208 R C via a two-layer feedforward network. s t has the same size as the label embedding l and is used as the state representation at time step t. By stacking the action embeddings (i.e., the embeddings of candidate labels and stop action), we obtain an action matrix A t with size |A| \u00d7 C. A t is multiplied with the state embedding s t , which outputs the probability distribution of actions. Finally, an action a t is sampled based on the probability distribution of the action space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning for Hierarchical Label Assignment",
"sec_num": "2.2"
},
{
"text": "s t = ReLU(W 1 l ReLU(W 2 l [e d ; l t ])), \u03c0(a t | s; W) = softmax(A t s t ), a t \u223c \u03c0(a t | s; W).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning for Hierarchical Label Assignment",
"sec_num": "2.2"
},
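The state and policy computation above can be sketched in PyTorch as follows (editor-added illustration with assumed dimensions and names; not the released implementation):

```python
import torch
import torch.nn as nn

class PolicyNetwork(nn.Module):
    def __init__(self, obj_dim: int, label_dim: int, num_labels: int):
        super().__init__()
        # one embedding per label plus one extra row for the stop action
        self.label_emb = nn.Embedding(num_labels + 1, label_dim)
        self.state_mlp = nn.Sequential(                       # two-layer feed-forward network
            nn.Linear(obj_dim + label_dim, label_dim), nn.ReLU(),
            nn.Linear(label_dim, label_dim), nn.ReLU(),
        )

    def forward(self, e_d, current_label, candidate_ids):
        l_t = self.label_emb(current_label)                    # embedding of the current label
        s_t = self.state_mlp(torch.cat([e_d, l_t], dim=-1))    # state representation s_t (size C)
        A_t = self.label_emb(candidate_ids)                    # |A_t| x C action matrix
        probs = torch.softmax(A_t @ s_t, dim=-1)               # pi(a | s; W)
        action = torch.multinomial(probs, 1)                   # a_t ~ pi(a | s; W)
        return action, probs
```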
{
"text": "We use policy gradient (Williams, 1992) as the optimization algorithm. In addition, we adopt a selfcritical training approach (Rennie et al., 2017).",
"cite_spans": [
{
"start": 23,
"end": 39,
"text": "(Williams, 1992)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning for Hierarchical Label Assignment",
"sec_num": "2.2"
},
{
"text": "For each object x i , two label assignments are generated:L x i is sampled from the probability distribution, andL x i , the baseline label assignment, is greedily obtained by choosing the action with the highest probability at each time step. We us\u1ebd",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning for Hierarchical Label Assignment",
"sec_num": "2.2"
},
{
"text": "r x i t = rL x i t \u2212 rL x i t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning for Hierarchical Label Assignment",
"sec_num": "2.2"
},
{
"text": "as the actual reward, which ensures that the policy network learns to place the object to positions with higher F1 score than the greedy baseline. Formally, we measure the global loss O g as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning for Hierarchical Label Assignment",
"sec_num": "2.2"
},
{
"text": "O g = \u2212 N i=1 T t=1 log\u03c0 x i (a t | s; W) \u00d7 v x i t ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning for Hierarchical Label Assignment",
"sec_num": "2.2"
},
{
"text": "where v x i j = T t=j \u03b3 t\u2212jr x i t is the cumulative future reward at time j and \u03b3 \u2208 [0, 1] is the discount factor. At the time of inference, we greedily select labels with the highest probability asL x i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning for Hierarchical Label Assignment",
"sec_num": "2.2"
},
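A compact sketch of the self-critical objective O_g (editor-added; `log_probs` are the log-probabilities of the sampled actions, and the two reward lists come from the sampled and greedy rollouts, assumed here to have the same length):

```python
import torch

def self_critical_loss(log_probs, sampled_rewards, greedy_rewards, gamma=1.0):
    """REINFORCE with the greedy rollout as baseline: loss = -sum_t log pi(a_t) * v_t."""
    rewards = [rs - rg for rs, rg in zip(sampled_rewards, greedy_rewards)]  # r~_t
    returns, v = [], 0.0
    for r in reversed(rewards):          # v_j = sum_{t >= j} gamma^(t-j) * r~_t
        v = r + gamma * v
        returns.append(v)
    returns.reverse()
    loss = torch.zeros(())
    for log_p, v_t in zip(log_probs, returns):
        loss = loss - log_p * v_t        # accumulate the policy-gradient objective
    return loss
```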
{
"text": "3 End-to-End Model Learning",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reinforcement Learning for Hierarchical Label Assignment",
"sec_num": "2.2"
},
{
"text": "Instead of learning from scratch, we use supervised learning to pre-train HiLAP. We denote the supervised variant as HiLAP-SL. While most parameters of HiLAP-SL are shared and used to initialize HiLAP (except that e stop is randomly initialized), its way of exploring the label hierarchy H is dissimilar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Top-Down Supervised Pre-Training",
"sec_num": "3.1"
},
{
"text": "The major difference is that HiLAP-SL explores the label hierarchy H in a top-down manner independently. At each time step t, the object goes down one level on the hierarchy and the labels under the same parent are discriminated locally. Specifically, the local per-parent label probabil-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Top-Down Supervised Pre-Training",
"sec_num": "3.1"
},
{
"text": "ity distribution p Local t is estimated as p Local t = \u03c3(C t s t ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Top-Down Supervised Pre-Training",
"sec_num": "3.1"
},
{
"text": "where \u03c3 denotes the sigmoid function, and C t \u2208 R |C(lt)|\u00d7C denotes the candidate embeddings of HiLAP-SL, i.e., an embedding matrix consisting of the children of current label l t , rather than all the labels where x i has been placed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Top-Down Supervised Pre-Training",
"sec_num": "3.1"
},
{
"text": "Another difference is that in HiLAP the actions are sampled and thus might place the objects to incorrect labels, while in HiLAP-SL only the ground-truth positions are traversed during training. Specifically, if there are K(\u2265 1) groundtruth labels at the same level, the object embedding e d would be copied K times and losses on the K different paths would be measured independently (see Fig. 6 in Appendix for illustration). The local loss of HiLAP-SL is defined as",
"cite_spans": [],
"ref_spans": [
{
"start": 389,
"end": 395,
"text": "Fig. 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Top-Down Supervised Pre-Training",
"sec_num": "3.1"
},
{
"text": "O l = T t=0 O t ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Top-Down Supervised Pre-Training",
"sec_num": "3.1"
},
{
"text": "where T is the lowest label's 449 level of one example and O t estimates the binary cross entropy over the candidate labels C(l t ):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Top-Down Supervised Pre-Training",
"sec_num": "3.1"
},
{
"text": "O t = \u2212 N i=1 l\u2208C(l t,i ) L i (l) \u00d7 logp Local t,i (l) + (1 \u2212 L i (l)) \u00d7 log(1 \u2212 p Local t,i (l)), where L i (l) and p Local t,i (l) evaluate label l of x i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Top-Down Supervised Pre-Training",
"sec_num": "3.1"
},
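A minimal sketch of the per-level supervised loss O_t (editor-added; `state` is the s_t vector, `child_embeddings` the matrix C_t of the current label's children, and `gold_mask` a 0/1 float tensor marking which children are ground-truth labels):

```python
import torch.nn.functional as F

def local_level_loss(state, child_embeddings, gold_mask):
    """Binary cross entropy over p_t^Local = sigma(C_t s_t), for one object at one level."""
    logits = child_embeddings @ state        # one score per child of the current label l_t
    return F.binary_cross_entropy_with_logits(logits, gold_mask, reduction="sum")
```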
{
"text": "Intuitively, HiLAP-SL works as if there were a set of local classifiers, although most of its parameters (except for the label embedding l) are shared by all the labels so that there is no need to train multiple classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Top-Down Supervised Pre-Training",
"sec_num": "3.1"
},
{
"text": "We further add a flat component to HiLAP as a regularization of the base model. Specifically, the flat component is a feed-forward network that projects the object embedding e d to a label probability distribution p Flat of all the labels on the hierarchy:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Flat, Local, and Global Information for Policy Learning",
"sec_num": "3.2"
},
{
"text": "p Flat = \u03c3(W Flat e d ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Flat, Local, and Global Information for Policy Learning",
"sec_num": "3.2"
},
{
"text": "The combination of the base model and the flat component functions the same as a flat model and ensures that the object representation e d has the capability of flat classification. We denote the flat loss that measures the binary cross entropy over all the labels",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Flat, Local, and Global Information for Policy Learning",
"sec_num": "3.2"
},
{
"text": "by O f = \u2212 N i=1 l\u2208L L i (l) \u00d7 logp Flat i (l) + (1 \u2212 L i (l)) \u00d7 log(1 \u2212 p Flat i (l)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Flat, Local, and Global Information for Policy Learning",
"sec_num": "3.2"
},
{
"text": "Combining the flat and local losses, the supervised loss in HiLAP-SL is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Flat, Local, and Global Information for Policy Learning",
"sec_num": "3.2"
},
{
"text": "O SL = \u03bbO f + (1 \u2212 \u03bb)O l , where \u03bb \u2208 [0, 1]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Flat, Local, and Global Information for Policy Learning",
"sec_num": "3.2"
},
{
"text": "is the mixing ratio. Similar to Celikyilmaz et al. (2018), we also found that mixing a proportion of the supervised loss is beneficial to the learning process of HiLAP. Further combining the global information O g (i.e., O RL ), the total loss of HiLAP is defined as O = O RL + \u03b1O SL , where \u03b1 is a scaling factor accounting for the difference in magnitude between O RL and O SL . While we do not use the flat component during inference, it helps the representation learning of the base model and improves the performance of both HiLAP-SL and HiLAP (see Sec. 4.5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combining Flat, Local, and Global Information for Policy Learning",
"sec_num": "3.2"
},
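Putting the pieces together, the mixing described above can be sketched as follows (editor-added; the lambda and alpha values are placeholders, not the tuned hyperparameters):

```python
def total_loss(o_rl, o_flat, o_local, lam=0.5, alpha=1.0):
    o_sl = lam * o_flat + (1.0 - lam) * o_local   # O_SL: supervised mixture (HiLAP-SL)
    return o_rl + alpha * o_sl                    # O = O_RL + alpha * O_SL (full HiLAP)
```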
{
"text": "Datasets. We conduct extensive experiments on five public datasets from various domains (summarized in Table 1 and detailed in Appendix A). The first two datasets are related to news categorization, including RCV1 (Lewis et al., 2004) and the NYT annotated corpus (Sandhaus, 2008) . The third dataset is the Yelp Dataset Challenge 2018 3 . We hypothesize that one business can be represented by its reviews and use the reviews to predict business categories. The last two datasets are related to protein functional catalogue (FunCat) and gene ontology (GO) prediction (Vens et al., 2008) , which are used to test the generalization ability of HiLAP to non-textual data. For all the datasets, the lowest labels of one example may not be at the leaf nodes and there could be multiple labels at each level, making them harder and more realistic than mandatory-leaf or single-path datasets such as IPC (WIPO, 2014) and LSHTC (Partalas et al., 2015) .",
"cite_spans": [
{
"start": 264,
"end": 280,
"text": "(Sandhaus, 2008)",
"ref_id": "BIBREF28"
},
{
"start": 568,
"end": 587,
"text": "(Vens et al., 2008)",
"ref_id": "BIBREF33"
},
{
"start": 921,
"end": 944,
"text": "(Partalas et al., 2015)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 103,
"end": 110,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "4.1"
},
{
"text": "Evaluation Metrics. We use standard metrics (Johnson and Zhang, 2014; Meng et al., 2018; Peng et al., 2018) for HTC, including Micro-F1, Macro-F1, and Example-based F1 (EBF) (Partalas et al., 2015; Peng et al., 2016). Let T P i , F P i , F N i denote the true positive, false positive, and false negative for the i-th example x i in object set X, respectively. EBF calculates the F1 scores of all the examples independently and averages them.",
"cite_spans": [
{
"start": 44,
"end": 69,
"text": "(Johnson and Zhang, 2014;",
"ref_id": "BIBREF11"
},
{
"start": 70,
"end": 88,
"text": "Meng et al., 2018;",
"ref_id": "BIBREF19"
},
{
"start": 89,
"end": 107,
"text": "Peng et al., 2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "4.1"
},
{
"text": "P x i = T P i T P i +F P i , R x i = T P i T P i +F N i , F1 x i = 2P x i \u00d7R x i P x i +R x i , and EBF = 1 N N i=1 F1 x i . Re- call that F1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "4.1"
},
{
"text": "x i is used as the reward in HiLAP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "4.1"
},
{
"text": "Base Models for Feature Encoding. Different from most of existing global HTC methods that rely on pre-specified features (Gopal and Yang, 2013) as input or build on specific models (Cai and Hofmann, 2004; Vens et al., 2008; Silla Jr and Freitas, 2009) , our framework is trained in an end-to-end manner by leveraging a differentiable feature representation learning model as the base model. Specifically, we use TextCNN (Kim, 2014) , HAN (Yang et al., 2016) , bow-CNN (Johnson and Zhang, 2014) on the three textual datasets, and a feed-forward network on the two nontextual datasets. The details of the base models are provided in Appendix C due to limited space.",
"cite_spans": [
{
"start": 121,
"end": 143,
"text": "(Gopal and Yang, 2013)",
"ref_id": "BIBREF9"
},
{
"start": 181,
"end": 204,
"text": "(Cai and Hofmann, 2004;",
"ref_id": "BIBREF4"
},
{
"start": 205,
"end": 223,
"text": "Vens et al., 2008;",
"ref_id": "BIBREF33"
},
{
"start": 224,
"end": 251,
"text": "Silla Jr and Freitas, 2009)",
"ref_id": "BIBREF30"
},
{
"start": 420,
"end": 431,
"text": "(Kim, 2014)",
"ref_id": "BIBREF12"
},
{
"start": 438,
"end": 457,
"text": "(Yang et al., 2016)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "4.1"
},
{
"text": "To incorporate one base model into our framework, we remove its final feed-forward layer that projects the object representation e d to a flat probability distribution of all labels (p Flat ), and use e d directly as the input of HiLAP. As one will see in the later experiments, HiLAP consistently improves the base model by modeling the label hierarchy in an effective manner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "4.1"
},
{
"text": "1. Traditional HTC Methods. A major line of work for HTC is Support Vector Machines (SVM) and its hierarchical variants. Specifically, SVM performs standard multi-label classification using one-vs-the-rest (OvR) strategy. Leaf-SVM treats each leaf node as a label and adds the ancestors of predicted leaf nodes. Variants such as HSVM (Tsochantaridis et al., 2005) , Top-Down SVM (TD-SVM) (Liu et al., 2005) , and Hierarchically Regularized SVM (HR-SVM) (Gopal and Yang, 2013) are also tested. Other state-of-theart HTC methods that we compare with include Clus-HMC (Vens et al., 2008) and CSSA (Bi and Kwok, 2011) .",
"cite_spans": [
{
"start": 334,
"end": 363,
"text": "(Tsochantaridis et al., 2005)",
"ref_id": "BIBREF32"
},
{
"start": 388,
"end": 406,
"text": "(Liu et al., 2005)",
"ref_id": "BIBREF17"
},
{
"start": 453,
"end": 475,
"text": "(Gopal and Yang, 2013)",
"ref_id": "BIBREF9"
},
{
"start": 565,
"end": 584,
"text": "(Vens et al., 2008)",
"ref_id": "BIBREF33"
},
{
"start": 594,
"end": 613,
"text": "(Bi and Kwok, 2011)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compared Methods",
"sec_num": "4.2"
},
{
"text": "2. Neural HTC Methods. There are not many neural methods that specifically target HTC. We mainly compare with two latest neural models: HR-DGCNN (Peng et al., 2018) , which extends hierarchical regularization (Gopal and Yang, 2013) to Graph-CNN and compares favorably to flat models like RCNN (Lai et al., 2015) and XML-CNN (Liu et al., 2017) , and HMCN (Wehrmann et al., 2018) , which outperforms state-of-the-art HTC methods such as HMC-LMLP (Cerri et al., 2016) . We also compare with the base models that we use for feature encoding. The main aim is to see how much gain they could obtain by combining each one of them with HiLAP.",
"cite_spans": [
{
"start": 145,
"end": 164,
"text": "(Peng et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 209,
"end": 231,
"text": "(Gopal and Yang, 2013)",
"ref_id": "BIBREF9"
},
{
"start": 293,
"end": 305,
"text": "(Lai et al.,",
"ref_id": null
},
{
"start": 324,
"end": 342,
"text": "(Liu et al., 2017)",
"ref_id": "BIBREF16"
},
{
"start": 354,
"end": 377,
"text": "(Wehrmann et al., 2018)",
"ref_id": "BIBREF35"
},
{
"start": 444,
"end": 464,
"text": "(Cerri et al., 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compared Methods",
"sec_num": "4.2"
},
{
"text": "For datasets without held-out set, we randomly sample 10% from the training set as the validation set following Johnson and Zhang (2014) ; Peng et al. (2018) . We only use the first 256 tokens of each document for representation learning. All the models are trained using an Adam optimizer with initial learning rate 1e-3 and weight decay 1e-6. We use GloVe (Pennington et al., 2014) with size 50 as word embeddings for TextCNN (Kim, 2014) and HAN (Yang et al., 2016) . We create a vocabulary of the most frequent 30,000 words in the training data and generate multi-hot vectors as the input of bow-CNN (Johnson and Zhang, 2014) . For our framework, since the parameter updates are performed after T steps, we cache the object representation e d and reuse it at each step for better efficiency. More details are provided in Appendix D for reproducibility.",
"cite_spans": [
{
"start": 112,
"end": 136,
"text": "Johnson and Zhang (2014)",
"ref_id": "BIBREF11"
},
{
"start": 139,
"end": 157,
"text": "Peng et al. (2018)",
"ref_id": "BIBREF21"
},
{
"start": 358,
"end": 383,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF23"
},
{
"start": 428,
"end": 439,
"text": "(Kim, 2014)",
"ref_id": "BIBREF12"
},
{
"start": 448,
"end": 467,
"text": "(Yang et al., 2016)",
"ref_id": "BIBREF38"
},
{
"start": 603,
"end": 628,
"text": "(Johnson and Zhang, 2014)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.3"
},
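For reference, the stated optimization setup corresponds to the following one-liner (editor-added; `model` is a hypothetical stand-in for any base model plus the HiLAP policy network):

```python
import torch

model = torch.nn.Linear(50, 10)  # placeholder module standing in for base model + policy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-6)
```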
{
"text": "1. Comparison with State-of-the-art Methods. We compare the performance of HiLAP to state-of-the-art HTC methods and show the results in Tables 2 and 3 . On RCV1, Hi-LAP (HAN) achieves similar performance to HR-DGCNN even though the corresponding base model HAN is originally worse than HR-DGCNN. HiLAP (TextCNN) outperforms most baselines in Macro-F1 and perform similarly to TD-SVM despite that it uses one global classifier while TD- Figure 4 : Performance comparison of different classification frameworks using the same base models. We compare HiLAP with its flat, supervised variants, and HMCN. Results show that HiLAP exhibits consistent improvement over flat classifiers and larger gains than HMCN.",
"cite_spans": [],
"ref_spans": [
{
"start": 137,
"end": 151,
"text": "Tables 2 and 3",
"ref_id": "TABREF2"
},
{
"start": 437,
"end": 445,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "4.4"
},
{
"text": "SVM uses a set of classifiers. Among all compared methods, HiLAP (bow-CNN) achieves the best performance on all the three metrics. 4 On NYT, similar results are observed: TextCNN and HAN are both improved when combining with HiLAP and HiLAP (bow-CNN) again achieves the best performance. On Yelp, HiLAP (HAN) achieves the best Micro-F1 and EBF, while HiLAP (bow-CNN) obtains the highest Macro-F1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "4.4"
},
{
"text": "2. Comparison using Same Base Models. We compare the performance of different frameworks that support the use of exactly the same base models and summarize the results in Fig. 4 . 5 Due to the extreme imbalance of the data, directly applying a flat model may suffer from low Macro-F1, i.e., the predictions of flat models are inevitably biased to the most popular labels. HMCN also has the same issue, resulting in Macro-F1 lower than 10 when combining with some base models. In contrast, HiLAP outperforms the baselines significantly in Macro-F1, which implies that our method is bet- Table 4 : Performance comparison on Functional Catalogue and Gene Ontology. We compare with state-of-the-art hierarchical classification methods that take exactly the same raw features as input (i.e., we exclude models designed specifically for text objects).",
"cite_spans": [
{
"start": 180,
"end": 181,
"text": "5",
"ref_id": null
}
],
"ref_spans": [
{
"start": 171,
"end": 177,
"text": "Fig. 4",
"ref_id": null
},
{
"start": 586,
"end": 593,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "4.4"
},
{
"text": "FunCat GO Kwok, 2011), CLUS-HMC (Vens et al., 2008) , and HMCN (Wehrmann et al., 2018) on the Fun-Cat and GO datasets, as they represent the stateof-the-art on these datasets. An SVM classifier is also evaluated to better understand the difficulties of the task. We use the same raw features as the input of all the methods for apples-to-apples comparison and list the results in Table 4 . Note that the metric area under the average precision-recall curve (AUPRC) (Wehrmann et al., 2018) is not applicable because HiLAP does not use a flat probability distribution of all the labels. As one can see, HiLAP outperforms all the baselines on both datasets by a large margin. In particular, we observe significant improvement on Macro-F1 over the best baseline (47.9% and 53.9%, respectively), which shows that our method is especially better at classifying sparse labels than previous approaches.",
"cite_spans": [
{
"start": 32,
"end": 51,
"text": "(Vens et al., 2008)",
"ref_id": "BIBREF33"
},
{
"start": 63,
"end": 86,
"text": "(Wehrmann et al., 2018)",
"ref_id": "BIBREF35"
},
{
"start": 465,
"end": 488,
"text": "(Wehrmann et al., 2018)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [
{
"start": 380,
"end": 387,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Method",
"sec_num": null
},
{
"text": "1. Ablation Study on Different Framework Components. We show the ablation analysis of HiLAP in Table 5 . Using Flat-Only degenerates HiLAP to the flat baseline. By comparing the results of Flat-Only and HiLAP-SL-NoFlat (a variant of HiLAP-SL without flat loss), we further confirm that flat approaches are likely to neglect sparse labels, which results in low Macro-F1. Local approaches (HiLAP-SL-NoFlat), on the other hand, are slightly worse in terms of Micro-F1 and EBF but significantly better on Macro-F1. By combining flat and local information, HiLAP-SL achieves performance close to Flat-Only on Micro-F1 and EBF, and even higher Macro-F1 than HiLAP-SL-NoFlat. HiLAP-NoSL is initialized by the pre-trained HiLAP-SL model without mixing the supervised loss during its training. We can see that using the reinforced loss alone still improves the performance on all the three metrics. After removing the flat loss during the training of HiLAP, HiLAP-NoFlat shows slightly lower performance than the full HiLAP model, indicating that the flat component serves as a regularization of the base model and is beneficial to the overall performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 95,
"end": 102,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Performance Analysis",
"sec_num": "4.5"
},
{
"text": "and Popularity. We analyze the sources of performance gains by dividing the labels based on their levels and number of supporting examples. Fig. 5 shows the absolute Macro-F1 differences between several methods and the base model. We observe similar results for other setups and omit them for a clearer view. As depicted in Fig. 5 , HiLAP and HiLAP-SL are especially beneficial to unpopular labels (P3) at the bottom levels (L3).",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 146,
"text": "Fig. 5",
"ref_id": null
},
{
"start": 324,
"end": 330,
"text": "Fig. 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Performance Study on Label Granularity",
"sec_num": "2."
},
{
"text": "Level-based Macro-F1 Gains Figure 5 : Performance Study on Label Granularity and Popularity. We compute level-based and popularity-based Macro-F1 gains on NYT with bow-CNN as base model. We denote the levels of the hierarchy with L1, L2, and L3 (left) and divide the labels into three equal sized categories (P1, P2, and P3) in a descending order by their number of examples (right).",
"cite_spans": [],
"ref_spans": [
{
"start": 27,
"end": 35,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "L1 L2 L3",
"sec_num": null
},
{
"text": "3. Analysis of Label Inconsistency. Label inconsistencies often happen in approaches that perform flat inference, but they are not measured by standard evaluation metrics like F1 scores. To provide a picture of how severe the issue is, we further conduct experiments to check the percentage of objects that are predicted with inconsistent labels (Table 6 ). We found, for example, 29,186/781,265 (3.74%) predictions of TextCNN have inconsistent on RCV1. In contrast, HiLAP ensures 0% label inconsistency without the need of post-processing, because its predictions are always valid sub-trees of the label hierarchy (refer to Fig. 2 ). Hierarchical classification approaches have been developed for many applications. For text classification, both traditional methods (Lewis et al., 2004; Gopal and Yang, 2013) and neural methods (Johnson and Zhang, 2014; Peng et al., 2018) have been proposed to classify, e.g., the topics of newswire and web content (Sun and Lim, 2001) or categories of laws and patents (Bi and Kwok, 2015; Cai and Hofmann, 2004; Rousu et al., 2005) . Many previous studies (Liu et al., 2005; Sun and Lim, 2001 ) train a set of local classifiers and make predictions in a top-down manner. In particular, Bi and Kwok (2015) develop Bayesoptimal predictions that minimize the global risks but their model is still locally trained. Such local approaches are not popularly used among recent neural-based HTC models (Johnson and Zhang, 2014; Peng et al., 2018) since it is usually infeasible to train many neural classifiers locally. Global methods, on the other hand, train only one classifier. Although global methods are desirable, they are relatively less studied due to the complexity of the problem. Existing global models are generally modified based on specific flat models. Hierarchical-SVM (Cai and Hofmann, 2004; Qiu et al., 2009) generalizes Support Vector Machine (SVM) learning based on discriminant functions that are structured in a way that mirrors the label hierarchy. One limitation is that Hierarchical-SVM only supports balanced tree (all possible labels are presumed to be at the same level in their experiments). Hierarchical naive Bayes (Silla Jr and Freitas, 2009) modifies naive Bayes by updating weights of one's ancestors as well whenever one label's weights are updated. There are other global methods that are based on association rules (Wang et al., 2001) , C4.5 (Clare and King, 2003) , kernel machines (Rousu et al., 2005) , and decision tree (Vens et al., 2008) . Constraints such as the regularization that enforces the parameters of one node and its parent to be similar (Gopal and Yang, 2013) are also proposed to leverage the label hierarchy while maintaining scalability. However, their use of the label hierarchies is somewhat limited compared with HiLAP.",
"cite_spans": [
{
"start": 767,
"end": 787,
"text": "(Lewis et al., 2004;",
"ref_id": "BIBREF15"
},
{
"start": 788,
"end": 809,
"text": "Gopal and Yang, 2013)",
"ref_id": "BIBREF9"
},
{
"start": 829,
"end": 854,
"text": "(Johnson and Zhang, 2014;",
"ref_id": "BIBREF11"
},
{
"start": 855,
"end": 873,
"text": "Peng et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 951,
"end": 970,
"text": "(Sun and Lim, 2001)",
"ref_id": "BIBREF31"
},
{
"start": 1005,
"end": 1024,
"text": "(Bi and Kwok, 2015;",
"ref_id": "BIBREF2"
},
{
"start": 1025,
"end": 1047,
"text": "Cai and Hofmann, 2004;",
"ref_id": "BIBREF4"
},
{
"start": 1048,
"end": 1067,
"text": "Rousu et al., 2005)",
"ref_id": "BIBREF27"
},
{
"start": 1092,
"end": 1110,
"text": "(Liu et al., 2005;",
"ref_id": "BIBREF17"
},
{
"start": 1111,
"end": 1128,
"text": "Sun and Lim, 2001",
"ref_id": "BIBREF31"
},
{
"start": 1222,
"end": 1240,
"text": "Bi and Kwok (2015)",
"ref_id": "BIBREF2"
},
{
"start": 1429,
"end": 1454,
"text": "(Johnson and Zhang, 2014;",
"ref_id": "BIBREF11"
},
{
"start": 1455,
"end": 1473,
"text": "Peng et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 1813,
"end": 1836,
"text": "(Cai and Hofmann, 2004;",
"ref_id": "BIBREF4"
},
{
"start": 1837,
"end": 1854,
"text": "Qiu et al., 2009)",
"ref_id": "BIBREF24"
},
{
"start": 2380,
"end": 2399,
"text": "(Wang et al., 2001)",
"ref_id": "BIBREF34"
},
{
"start": 2407,
"end": 2429,
"text": "(Clare and King, 2003)",
"ref_id": "BIBREF8"
},
{
"start": 2448,
"end": 2468,
"text": "(Rousu et al., 2005)",
"ref_id": "BIBREF27"
},
{
"start": 2489,
"end": 2508,
"text": "(Vens et al., 2008)",
"ref_id": "BIBREF33"
},
{
"start": 2620,
"end": 2642,
"text": "(Gopal and Yang, 2013)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 346,
"end": 354,
"text": "(Table 6",
"ref_id": "TABREF7"
},
{
"start": 625,
"end": 631,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "L1 L2 L3",
"sec_num": null
},
{
"text": "We proposed an end-to-end reinforcement learning approach to hierarchical text classification (HTC) where objects are labeled by placing them at the proper positions in the label hierarchy. The proposed framework makes consistent and inter-dependent predictions, in which any neuralbased representation learning model can be used as a base model and a label assignment policy is learned to determine where to place the objects and when to stop the assignment process. Experiments on five public datasets and four base models showed that our approach outperforms stateof-the-art HTC methods significantly. For future work, we will explore the effectiveness of the proposed framework on other base models and forms of data (e.g., images). We will introduce more losses covering other aspects in the objective function to further improve the performance of our framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "We use \"example\" and \"object\" interchangeably.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.yelp.com/dataset/ challenge",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The results are not comparable withJohnson and Zhang (2014) due to implementation details and the fact that they tune the threshold for each label using k-fold crossvalidation. See Appendix B for more discussions.5 For HMCN, we replace its static features with the same base model for fair comparison.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Research was sponsored in part by U.S. Army Research Lab under Cooperative Agreement No. W911NF-09-2-0053 (NSCTA), DARPA under Agreement No. W911NF-17-C-0099, National Science Foundation IIS 16-18481, IIS 17-04532, and IIS-17-41317, grant 1U54GM-114838 awarded by NIGMS, National Science Foundation SMA 18-29268, DARPA MCS and GAILA, IARPA BETTER, Schmidt Family Foundation, Amazon Faculty Award, Google Research Award, Snapchat Gift, and JP Morgan AI Research Award. We thank Chao Zhang, Xiao-Yang Liu, Qingrong Chen, Jun Yan, collaborators in the INK research lab, and anonymous reviewers for their help and valuable feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Multi-label learning with millions of labels: Recommending advertiser bid phrases for web pages",
"authors": [
{
"first": "Rahul",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Archit",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Yashoteja",
"middle": [],
"last": "Prabhu",
"suffix": ""
},
{
"first": "Manik",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2013,
"venue": "WWW",
"volume": "",
"issue": "",
"pages": "13--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rahul Agrawal, Archit Gupta, Yashoteja Prabhu, and Manik Varma. 2013. Multi-label learning with millions of labels: Recommending advertiser bid phrases for web pages. In WWW, pages 13-24. ACM.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bayes-optimal hierarchical multilabel classification",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Bi",
"suffix": ""
},
{
"first": "Jame",
"middle": [
"T"
],
"last": "Kwok",
"suffix": ""
}
],
"year": 2015,
"venue": "TKDE",
"volume": "27",
"issue": "11",
"pages": "2907--2918",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Bi and Jame T Kwok. 2015. Bayes-optimal hierarchical multilabel classification. TKDE, 27(11):2907-2918.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multi-label classification on tree-and dag-structured hierarchies",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Bi",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "James",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kwok",
"suffix": ""
}
],
"year": 2011,
"venue": "ICML-11",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Bi and James T Kwok. 2011. Multi-label classi- fication on tree-and dag-structured hierarchies. In ICML-11, pages 17-24.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Hierarchical document categorization with support vector machines",
"authors": [
{
"first": "Lijuan",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 2004,
"venue": "CIKM",
"volume": "",
"issue": "",
"pages": "78--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lijuan Cai and Thomas Hofmann. 2004. Hierarchi- cal document categorization with support vector ma- chines. In CIKM, pages 78-87. ACM.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Deep communicating agents for abstractive summarization",
"authors": [
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bosselut",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "1662--1675",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, and Yejin Choi. 2018. Deep communicating agents for abstractive summarization. In NAACL, pages 1662- 1675.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Reduction strategies for hierarchical multi-label classification in protein function prediction",
"authors": [
{
"first": "Ricardo",
"middle": [],
"last": "Cerri",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Rodrigo",
"suffix": ""
},
{
"first": "Andr\u00e9 Cplf De",
"middle": [],
"last": "Barros",
"suffix": ""
},
{
"first": "Yaochu",
"middle": [],
"last": "Carvalho",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jin",
"suffix": ""
}
],
"year": 2016,
"venue": "BMC bioinformatics",
"volume": "17",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ricardo Cerri, Rodrigo C Barros, Andr\u00e9 CPLF de Car- valho, and Yaochu Jin. 2016. Reduction strate- gies for hierarchical multi-label classification in protein function prediction. BMC bioinformatics, 17(1):373.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Hierarchical classification: combining bayes with svm",
"authors": [
{
"first": "Nicol\u00f2",
"middle": [],
"last": "Cesa-Bianchi",
"suffix": ""
},
{
"first": "Claudio",
"middle": [],
"last": "Gentile",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Zaniboni",
"suffix": ""
}
],
"year": 2006,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "177--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicol\u00f2 Cesa-Bianchi, Claudio Gentile, and Luca Zani- boni. 2006. Hierarchical classification: combining bayes with svm. In ICML, pages 177-184. ACM.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Predicting gene function in saccharomyces cerevisiae",
"authors": [
{
"first": "Amanda",
"middle": [],
"last": "Clare",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Ross",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "King",
"suffix": ""
}
],
"year": 2003,
"venue": "Bioinformatics",
"volume": "19",
"issue": "2",
"pages": "42--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amanda Clare and Ross D King. 2003. Predicting gene function in saccharomyces cerevisiae. Bioinformat- ics, 19(suppl 2):ii42-ii49.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Recursive regularization for large-scale classification with hierarchical and graphical dependencies",
"authors": [
{
"first": "Siddharth",
"middle": [],
"last": "Gopal",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2013,
"venue": "KDD",
"volume": "",
"issue": "",
"pages": "257--265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siddharth Gopal and Yiming Yang. 2013. Recursive regularization for large-scale classification with hi- erarchical and graphical dependencies. In KDD, pages 257-265. ACM.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Gotrees: predicting go associations from protein domain composition using decision trees",
"authors": [
{
"first": "Boris",
"middle": [],
"last": "Hayete",
"suffix": ""
},
{
"first": "Jadwiga",
"middle": [
"R"
],
"last": "Bienkowska",
"suffix": ""
}
],
"year": 2005,
"venue": "Biocomputing 2005",
"volume": "",
"issue": "",
"pages": "127--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boris Hayete and Jadwiga R Bienkowska. 2005. Gotrees: predicting go associations from protein do- main composition using decision trees. In Biocom- puting 2005, pages 127-138. World Scientific.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Effective use of word order for text categorization with convolutional neural networks",
"authors": [
{
"first": "Rie",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.1058"
]
},
"num": null,
"urls": [],
"raw_text": "Rie Johnson and Tong Zhang. 2014. Effective use of word order for text categorization with convolutional neural networks. arXiv preprint arXiv:1412.1058.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1408.5882"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural net- works for sentence classification. arXiv preprint arXiv:1408.5882.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Hierarchically classifying documents using very few words",
"authors": [
{
"first": "Daphne",
"middle": [],
"last": "Koller",
"suffix": ""
},
{
"first": "Mehran",
"middle": [],
"last": "Sahami",
"suffix": ""
}
],
"year": 1997,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "170--178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daphne Koller and Mehran Sahami. 1997. Hierarchi- cally classifying documents using very few words. In ICML, pages 170-178. Morgan Kaufmann Pub- lishers Inc.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Recurrent convolutional neural networks for text classification",
"authors": [
{
"first": "Siwei",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Liheng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "AAAI",
"volume": "333",
"issue": "",
"pages": "2267--2273",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text classification. In AAAI, volume 333, pages 2267- 2273.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Rcv1: A new benchmark collection for text categorization research",
"authors": [
{
"first": "David",
"middle": [
"D"
],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Tony",
"middle": [
"G"
],
"last": "Rose",
"suffix": ""
},
{
"first": "Fan",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of machine learning research",
"volume": "5",
"issue": "",
"pages": "361--397",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David D Lewis, Yiming Yang, Tony G Rose, and Fan Li. 2004. Rcv1: A new benchmark collection for text categorization research. Journal of machine learning research, 5(Apr):361-397.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Deep learning for extreme multi-label text classification",
"authors": [
{
"first": "Jingzhou",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wei-Cheng",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Yuexin",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "115--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jingzhou Liu, Wei-Cheng Chang, Yuexin Wu, and Yiming Yang. 2017. Deep learning for extreme multi-label text classification. In Proceedings of the 40th International ACM SIGIR Conference on Re- search and Development in Information Retrieval, pages 115-124. ACM.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Support vector machines classification with a very large-scale taxonomy",
"authors": [
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Hua-Jun",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wei-Ying",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2005,
"venue": "Acm Sigkdd Explorations Newsletter",
"volume": "7",
"issue": "1",
"pages": "36--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tie-Yan Liu, Yiming Yang, Hao Wan, Hua-Jun Zeng, Zheng Chen, and Wei-Ying Ma. 2005. Support vec- tor machines classification with a very large-scale taxonomy. Acm Sigkdd Explorations Newsletter, 7(1):36-43.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "End-to-end reinforcement learning for automatic taxonomy induction",
"authors": [
{
"first": "Yuning",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jiaming",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Xiaotao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2018,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "2462--2472",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuning Mao, Xiang Ren, Jiaming Shen, Xiaotao Gu, and Jiawei Han. 2018. End-to-end reinforcement learning for automatic taxonomy induction. In ACL, pages 2462-2472. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Weakly-supervised neural text classification",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Jiaming",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Chao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2018,
"venue": "CIKM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Meng, Jiaming Shen, Chao Zhang, and Jiawei Han. 2018. Weakly-supervised neural text classification. In CIKM.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "LSHTC: A benchmark for large-scale text classification",
"authors": [
{
"first": "Ioannis",
"middle": [],
"last": "Partalas",
"suffix": ""
},
{
"first": "Aris",
"middle": [],
"last": "Kosmopoulos",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Baskiotis",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Arti\u00e8res",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Paliouras",
"suffix": ""
},
{
"first": "\u00c9ric",
"middle": [],
"last": "Gaussier",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
},
{
"first": "Massih-Reza",
"middle": [],
"last": "Amini",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Gallinari",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ioannis Partalas, Aris Kosmopoulos, Nicolas Baskiotis, Thierry Arti\u00e8res, George Paliouras,\u00c9ric Gaussier, Ion Androutsopoulos, Massih-Reza Amini, and Patrick Gallinari. 2015. LSHTC: A bench- mark for large-scale text classification. CoRR, abs/1503.08581.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Large-scale hierarchical text classification with recursively regularized deep graph-cnn",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Jianxin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Yaopeng",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mengjiao",
"middle": [],
"last": "Bao",
"suffix": ""
},
{
"first": "Lihong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yangqiu",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2018,
"venue": "WWW",
"volume": "",
"issue": "",
"pages": "1063--1072",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Peng, Jianxin Li, Yu He, Yaopeng Liu, Mengjiao Bao, Lihong Wang, Yangqiu Song, and Qiang Yang. 2018. Large-scale hierarchical text classification with recursively regularized deep graph-cnn. In WWW, pages 1063-1072.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Deepmesh: deep semantic representation for improving large-scale mesh indexing",
"authors": [
{
"first": "Shengwen",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Ronghui",
"middle": [],
"last": "You",
"suffix": ""
},
{
"first": "Hongning",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Mamitsuka",
"suffix": ""
},
{
"first": "Shanfeng",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2016,
"venue": "Bioinformatics",
"volume": "32",
"issue": "12",
"pages": "70--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shengwen Peng, Ronghui You, Hongning Wang, Chengxiang Zhai, Hiroshi Mamitsuka, and Shan- feng Zhu. 2016. Deepmesh: deep semantic repre- sentation for improving large-scale mesh indexing. Bioinformatics, 32(12):i70-i79.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP, pages 1532-1543.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Hierarchical multi-class text categorization with global margin maximization",
"authors": [
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Wenjun",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "165--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xipeng Qiu, Wenjun Gao, and Xuanjing Huang. 2009. Hierarchical multi-class text categorization with global margin maximization. In acl-ijcnlp 2009, pages 165-168.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "An evaluation of classification models for question topic categorization",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Gao",
"middle": [],
"last": "Cong",
"suffix": ""
},
{
"first": "Cuiping",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Aixin",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2012,
"venue": "JASIST",
"volume": "63",
"issue": "",
"pages": "889--903",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Qu, Gao Cong, Cuiping Li, Aixin Sun, and Hong Chen. 2012. An evaluation of classification models for question topic categorization. JASIST, 63:889- 903.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Self-critical sequence training for image captioning",
"authors": [
{
"first": "Steven",
"middle": [
"J"
],
"last": "Rennie",
"suffix": ""
},
{
"first": "Etienne",
"middle": [],
"last": "Marcheret",
"suffix": ""
},
{
"first": "Youssef",
"middle": [],
"last": "Mroueh",
"suffix": ""
},
{
"first": "Jarret",
"middle": [],
"last": "Ross",
"suffix": ""
},
{
"first": "Vaibhava",
"middle": [],
"last": "Goel",
"suffix": ""
}
],
"year": 2017,
"venue": "CVPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In CVPR, page 3.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Learning hierarchical multi-category text classification models",
"authors": [
{
"first": "Juho",
"middle": [],
"last": "Rousu",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Saunders",
"suffix": ""
},
{
"first": "Sandor",
"middle": [],
"last": "Szedmak",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Shawe-Taylor",
"suffix": ""
}
],
"year": 2005,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "744--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juho Rousu, Craig Saunders, Sandor Szedmak, and John Shawe-Taylor. 2005. Learning hierarchical multi-category text classification models. In ICML, pages 744-751. ACM.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The new york times annotated corpus. Linguistic Data Consortium",
"authors": [
{
"first": "Evan",
"middle": [],
"last": "Sandhaus",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "6",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evan Sandhaus. 2008. The new york times annotated corpus. Linguistic Data Consortium, Philadelphia, 6(12):e26752.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A survey of hierarchical classification across different application domains. Data Mining and Knowledge Discovery",
"authors": [
{
"first": "Carlos",
"middle": [
"N"
],
"last": "Silla",
"suffix": ""
},
{
"first": "Alex",
"middle": [
"A"
],
"last": "Freitas",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "22",
"issue": "",
"pages": "31--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlos N Silla and Alex A Freitas. 2011. A survey of hierarchical classification across different appli- cation domains. Data Mining and Knowledge Dis- covery, 22(1-2):31-72.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A globalmodel naive bayes approach to the hierarchical prediction of protein functions",
"authors": [
{
"first": "Carlos",
"middle": [
"N"
],
"last": "Silla",
"suffix": "Jr"
},
{
"first": "Alex",
"middle": [
"A"
],
"last": "Freitas",
"suffix": ""
}
],
"year": 2009,
"venue": "ICDM'09",
"volume": "",
"issue": "",
"pages": "992--997",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlos N Silla Jr and Alex A Freitas. 2009. A global- model naive bayes approach to the hierarchical pre- diction of protein functions. In ICDM'09, pages 992-997. IEEE.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Hierarchical text classification and evaluation",
"authors": [
{
"first": "Aixin",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Ee-Peng",
"middle": [],
"last": "Lim",
"suffix": ""
}
],
"year": 2001,
"venue": "ICDM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aixin Sun and Ee-Peng Lim. 2001. Hierarchical text classification and evaluation. In ICDM.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Large margin methods for structured and interdependent output variables",
"authors": [
{
"first": "Ioannis",
"middle": [],
"last": "Tsochantaridis",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Hofmann",
"suffix": ""
},
{
"first": "Yasemin",
"middle": [],
"last": "Altun",
"suffix": ""
}
],
"year": 2005,
"venue": "Journal of machine learning research",
"volume": "6",
"issue": "",
"pages": "1453--1484",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann, and Yasemin Altun. 2005. Large mar- gin methods for structured and interdependent out- put variables. Journal of machine learning research, 6(Sep):1453-1484.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Decision trees for hierarchical multi-label classification. Machine Learning",
"authors": [
{
"first": "Celine",
"middle": [],
"last": "Vens",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Struyf",
"suffix": ""
},
{
"first": "Leander",
"middle": [],
"last": "Schietgat",
"suffix": ""
},
{
"first": "Sa\u0161o",
"middle": [],
"last": "D\u017eeroski",
"suffix": ""
},
{
"first": "Hendrik",
"middle": [],
"last": "Blockeel",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "73",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Celine Vens, Jan Struyf, Leander Schietgat, Sa\u0161o D\u017eeroski, and Hendrik Blockeel. 2008. Decision trees for hierarchical multi-label classification. Ma- chine Learning, 73(2):185.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Hierarchical classification of real life documents",
"authors": [
{
"first": "Ke",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Senqiang",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2001,
"venue": "SDM",
"volume": "",
"issue": "",
"pages": "1--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ke Wang, Senqiang Zhou, and Yu He. 2001. Hierar- chical classification of real life documents. In SDM, pages 1-16. SIAM.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Hierarchical multi-label classification networks",
"authors": [
{
"first": "Jonatas",
"middle": [],
"last": "Wehrmann",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Cerri",
"suffix": ""
},
{
"first": "Rodrigo",
"middle": [],
"last": "Barros",
"suffix": ""
}
],
"year": 2018,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "5225--5234",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonatas Wehrmann, Ricardo Cerri, and Rodrigo Barros. 2018. Hierarchical multi-label classification net- works. In ICML, pages 5225-5234.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Simple statistical gradientfollowing algorithms for connectionist reinforcement learning",
"authors": [
{
"first": "Ronald",
"middle": [
"J"
],
"last": "Williams",
"suffix": ""
}
],
"year": 1992,
"venue": "Machine learning",
"volume": "8",
"issue": "3-4",
"pages": "229--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald J Williams. 1992. Simple statistical gradient- following algorithms for connectionist reinforce- ment learning. Machine learning, 8(3-4):229-256.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "International patent classification (ipc)",
"authors": [
{
"first": "",
"middle": [],
"last": "WIPO",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "IPC WIPO. 2014. International patent classification (ipc). World Intellectual Property Organization, Geneve.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Hierarchical attention networks for document classification",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Smola",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "1480--1489",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In NAACL, pages 1480-1489.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "train a set of local classifiers that function independently and predictions are usually made in a top-down order: one node is visited if and only if its ancestors have 446",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF0": {
"text": "",
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF1": {
"text": "Statistics of the datasets. |L| denotes the number of labels in the label hierarchy. Avg(|L i |) and Max(|L i |) denote the average and maximum number of labels of one object, respectively.",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"2\">Dataset Hierarchy</td><td>|L|</td><td colspan=\"4\">Avg(|Li|) Max(|Li|) Training Validation</td><td>Test</td></tr><tr><td>RCV1</td><td>Tree</td><td>103</td><td>3.24</td><td>17</td><td>23,149</td><td>2,315</td><td>781,265</td></tr><tr><td>NYT</td><td>Tree</td><td>115</td><td>2.52</td><td>14</td><td>25,279</td><td>2,528</td><td>10,828</td></tr><tr><td>Yelp</td><td>DAG</td><td>539</td><td>3.77</td><td>32</td><td>87,375</td><td>8,737</td><td>37,265</td></tr><tr><td>FunCat</td><td>Tree</td><td>499</td><td>8.76</td><td>45</td><td>1,628</td><td>848</td><td>1,281</td></tr><tr><td>GO</td><td>DAG</td><td>4,125</td><td>34.9</td><td>141</td><td>1,625</td><td>848</td><td>1,278</td></tr></table>"
},
"TABREF2": {
"text": "Performance comparison on RCV1. * denotes the results reported inPeng et al. (2018) on the same dataset split. Note that the results of HR-SVM reported inGopal and Yang (2013) are not comparable as they use a different hierarchy with 137 labels.",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td>Method</td><td colspan=\"3\">Micro-F1 Macro-F1 EBF</td></tr><tr><td/><td>Leaf-SVM *</td><td>69.1</td><td>33.0</td><td>-</td></tr><tr><td>Flat</td><td>SVM TextCNN</td><td>80.4 76.6</td><td>46.2 43.0</td><td>80.5 75.8</td></tr><tr><td/><td>HAN</td><td>75.3</td><td>40.6</td><td>76.1</td></tr><tr><td/><td>bow-CNN</td><td>82.7</td><td>44.7</td><td>83.3</td></tr><tr><td/><td>TD-SVM</td><td>80.1</td><td>50.7</td><td>80.5</td></tr><tr><td>Local &amp; Global</td><td>HSVM * HR-SVM * HR-DGCNN * HMCN HiLAP (TextCNN) HiLAP (HAN)</td><td>69.3 72.8 76.1 80.8 78.6 75.4</td><td>33.3 38.6 43.2 54.6 50.5 45.5</td><td>---82.2 80.1 77.4</td></tr><tr><td/><td>HiLAP (bow-CNN)</td><td>83.3</td><td>60.1</td><td>85.0</td></tr></table>"
},
"TABREF3": {
"text": "Performance comparison on the NYT and Yelp datasets. We mainly compare with competitive baselines that perform well on RCV1.",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>Method</td><td/><td>NYT</td><td/><td/><td>Yelp</td><td/></tr><tr><td/><td colspan=\"6\">Micro-F1 Macro-F1 EBF Micro-F1 Macro-F1 EBF</td></tr><tr><td>SVM</td><td>72.4</td><td>37.1</td><td>74.0</td><td>66.9</td><td>36.3</td><td>68.0</td></tr><tr><td>TextCNN</td><td>69.5</td><td>39.5</td><td>71.6</td><td>62.8</td><td>27.3</td><td>63.1</td></tr><tr><td>HAN</td><td>62.8</td><td>22.8</td><td>65.5</td><td>66.7</td><td>29.0</td><td>67.9</td></tr><tr><td>bow-CNN</td><td>72.9</td><td>33.4</td><td>74.1</td><td>63.6</td><td>23.9</td><td>63.9</td></tr><tr><td>TD-SVM</td><td>73.7</td><td>43.7</td><td>75.0</td><td>67.2</td><td>40.5</td><td>67.8</td></tr><tr><td>HMCN</td><td>72.2</td><td>47.4</td><td>74.2</td><td>66.4</td><td>42.7</td><td>67.6</td></tr><tr><td>HiLAP (TextCNN)</td><td>69.9</td><td>43.2</td><td>72.8</td><td>65.5</td><td>37.3</td><td>68.4</td></tr><tr><td>HiLAP (HAN)</td><td>65.2</td><td>28.7</td><td>68.0</td><td>69.7</td><td>38.1</td><td>72.4</td></tr><tr><td>HiLAP (bow-CNN)</td><td>74.6</td><td>51.6</td><td>76.6</td><td>68.9</td><td>42.8</td><td>71.5</td></tr></table>"
},
"TABREF6": {
"text": "Ablation study of HiLAP. We evaluate variants of HiLAP using bow-CNN(Johnson and Zhang, 2014) onRCV1 (Lewis et al., 2004).",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>Method</td><td colspan=\"3\">Micro-F1 Macro-F1 EBF</td></tr><tr><td>Flat-Only</td><td>82.7</td><td>44.7</td><td>83.3</td></tr><tr><td>HiLAP-SL-NoFlat</td><td>81.0</td><td>52.1</td><td>81.7</td></tr><tr><td>HiLAP-SL</td><td>82.5</td><td>55.3</td><td>83.0</td></tr><tr><td>HiLAP-NoSL</td><td>83.2</td><td>59.3</td><td>85.0</td></tr><tr><td>HiLAP-NoFlat</td><td>83.0</td><td>59.8</td><td>84.7</td></tr><tr><td>HiLAP</td><td>83.3</td><td>60.1</td><td>85.0</td></tr></table>"
},
"TABREF7": {
"text": "Analysis of Label Inconsistency. We compare various methods by the percentage of predictions with inconsistent labels on RCV1(Lewis et al., 2004).",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>SVM</td><td colspan=\"3\">TextCNN HMCN HiLAP</td></tr><tr><td>4.83%</td><td>3.74%</td><td>3.84%</td><td>0%</td></tr></table>"
}
}
}
}