{
"paper_id": "D19-1004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:03:39.193089Z"
},
"title": "Transfer Learning Between Related Tasks Using Expected Label Proportions",
"authors": [
{
"first": "Matan",
"middle": [],
"last": "Ben",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bar-Ilan University",
"location": {
"settlement": "Ramat-Gan Israel"
}
},
"email": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bar-Ilan University",
"location": {
"settlement": "Ramat-Gan Israel"
}
},
"email": "yoav.goldberg@gmail.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Deep learning systems thrive on abundance of labeled training data but such data is not always available, calling for alternative methods of supervision. One such method is expectation regularization (XR) (Mann and McCallum, 2007), where models are trained based on expected label proportions. We propose a novel application of the XR framework for transfer learning between related tasks, where knowing the labels of task A provides an estimation of the label proportion of task B. We then use a model trained for A to label a large corpus, and use this corpus with an XR loss to train a model for task B. To make the XR framework applicable to large-scale deep-learning setups, we propose a stochastic batched approximation procedure. We demonstrate the approach on the task of Aspectbased Sentiment classification, where we effectively use a sentence-level sentiment predictor to train accurate aspect-based predictor. The method improves upon fully supervised neural system trained on aspect-level data, and is also cumulative with LM-based pretraining, as we demonstrate by improving a BERTbased Aspect-based Sentiment model.",
"pdf_parse": {
"paper_id": "D19-1004",
"_pdf_hash": "",
"abstract": [
{
"text": "Deep learning systems thrive on abundance of labeled training data but such data is not always available, calling for alternative methods of supervision. One such method is expectation regularization (XR) (Mann and McCallum, 2007), where models are trained based on expected label proportions. We propose a novel application of the XR framework for transfer learning between related tasks, where knowing the labels of task A provides an estimation of the label proportion of task B. We then use a model trained for A to label a large corpus, and use this corpus with an XR loss to train a model for task B. To make the XR framework applicable to large-scale deep-learning setups, we propose a stochastic batched approximation procedure. We demonstrate the approach on the task of Aspectbased Sentiment classification, where we effectively use a sentence-level sentiment predictor to train accurate aspect-based predictor. The method improves upon fully supervised neural system trained on aspect-level data, and is also cumulative with LM-based pretraining, as we demonstrate by improving a BERTbased Aspect-based Sentiment model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Data annotation is a key bottleneck in many data driven algorithms. Specifically, deep learning models, which became a prominent tool in many data driven tasks in recent years, require large datasets to work well. However, many tasks require manual annotations which are relatively hard to obtain at scale. An attractive alternative is lightly supervised learning (Schapire et al., 2002; Jin and Liu, 2005; Chang et al., 2007; Gra\u00e7a et al., 2007; Quadrianto et al., 2009a; Mann and Mc-Callum, 2010a; Ganchev et al., 2010; Hope and Shahaf, 2016) , in which the objective function is supplemented by a set of domain-specific soft-constraints over the model's predictions on unlabeled data. For example, in label regularization (Mann and McCallum, 2007) the model is trained to fit the true label proportions of an unlabeled dataset. Label regularization is special case of expectation regularization (XR) (Mann and McCallum, 2007) , in which the model is trained to fit the conditional probabilities of labels given features.",
"cite_spans": [
{
"start": 364,
"end": 387,
"text": "(Schapire et al., 2002;",
"ref_id": "BIBREF38"
},
{
"start": 388,
"end": 406,
"text": "Jin and Liu, 2005;",
"ref_id": "BIBREF16"
},
{
"start": 407,
"end": 426,
"text": "Chang et al., 2007;",
"ref_id": "BIBREF1"
},
{
"start": 427,
"end": 446,
"text": "Gra\u00e7a et al., 2007;",
"ref_id": "BIBREF10"
},
{
"start": 447,
"end": 472,
"text": "Quadrianto et al., 2009a;",
"ref_id": "BIBREF35"
},
{
"start": 473,
"end": 499,
"text": "Mann and Mc-Callum, 2010a;",
"ref_id": null
},
{
"start": 500,
"end": 521,
"text": "Ganchev et al., 2010;",
"ref_id": "BIBREF9"
},
{
"start": 522,
"end": 544,
"text": "Hope and Shahaf, 2016)",
"ref_id": "BIBREF15"
},
{
"start": 725,
"end": 750,
"text": "(Mann and McCallum, 2007)",
"ref_id": "BIBREF23"
},
{
"start": 903,
"end": 928,
"text": "(Mann and McCallum, 2007)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work we consider the case of correlated tasks, in the sense that knowing the labels for task A provides information on the expected label composition of task B. We demonstrate the approach using sentence-level and aspect-level sentiment analysis, which we use as a running example: knowing that a sentence has positive sentiment label (task A), we can expect that most aspects within this sentence (task B) will also have positive label. While this expectation may be noisy on the individual example level, it holds well in aggregate: given a set of positively-labeled sentences, we can robustly estimate the proportion of positively-labeled aspects within this set. For example, in a random set of positive sentences, we expect to find 90% positive aspects, while in a set of negative sentences, we expect to find 70% negative aspects. These proportions can be easily either guessed or estimated from a small set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose a novel application of the XR framework for transfer learning in this setup. We present an algorithm (Sec 3.1) that, given a corpus labeled for task A (sentence-level sentiment), learns a classifier for performing task B (aspectlevel sentiment) instead, without a direct supervision signal for task B. We note that the label information for task A is only used at training time. Furthermore, due to the stochastic nature of the estimation, the task A labels need not be fully accurate, allowing us to make use of noisy predictions which are assigned by an automatic classifier (Sections 3.1 and 4). In other words, given a medium-sized sentiment corpus with sentencelevel labels, and a large collection of un-annotated text from the same distribution, we can train an accurate aspect-level sentiment classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The XR loss allows us to use task A labels for training task B predictors. This ability seamlessly integrates into other semi-supervised schemes: we can use the XR loss on top of a pre-trained model to fine-tune the pre-trained representation to the target task, and we can also take the model trained using XR loss and plentiful data and fine-tune it to the target task using the available small-scale annotated data. In Section 5.3 we explore these options and show that our XR framework improves the results also when applied on top of a pretrained BERT-based model (Devlin et al., 2018) .",
"cite_spans": [
{
"start": 569,
"end": 590,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Finally, to make the XR framework applicable to large-scale deep-learning setups, we propose a stochastic batched approximation procedure (Section 3.2). Source code is available at https: //github.com/MatanBN/XRTransfer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An effective way to supplement small annotated datasets is to use lightly supervised learning, in which the objective function is supplemented by a set of domain-specific soft-constraints over the model's predictions on unlabeled data. Previous work in lightly-supervised learning focused on training classifiers by using prior knowledge of label proportions (Jin and Liu, 2005; Chang et al., 2007; Musicant et al., 2007; Mann and Mc-Callum, 2007; Quadrianto et al., 2009b; Liang et al., 2009; Ganchev et al., 2010; Mann and Mc-Callum, 2010b; Chang et al., 2012; Wang et al., 2012; Zhu et al., 2014; Hope and Shahaf, 2016) or prior knowledge of features label associations (Schapire et al., 2002; Haghighi and Klein, 2006; Druck et al., 2008; Melville et al., 2009; Mohammady and Culotta, 2015) . In the context of NLP, Haghighi and Klein (2006) suggested to use distributional similarities of words to train sequence models for part-of-speech tagging and a classified ads information extraction task. Melville et al. (2009) used background lexical information in terms of word-class associations to train a sentiment classifier. Ganchev and Das (2013) ; Wang and Manning (2014) suggested to exploit the bilingual correlations between a resource rich language and a resource poor language to train a classifier for the resource poor language in a lightly supervised manner.",
"cite_spans": [
{
"start": 359,
"end": 378,
"text": "(Jin and Liu, 2005;",
"ref_id": "BIBREF16"
},
{
"start": 379,
"end": 398,
"text": "Chang et al., 2007;",
"ref_id": "BIBREF1"
},
{
"start": 399,
"end": 421,
"text": "Musicant et al., 2007;",
"ref_id": "BIBREF28"
},
{
"start": 422,
"end": 447,
"text": "Mann and Mc-Callum, 2007;",
"ref_id": null
},
{
"start": 448,
"end": 473,
"text": "Quadrianto et al., 2009b;",
"ref_id": "BIBREF36"
},
{
"start": 474,
"end": 493,
"text": "Liang et al., 2009;",
"ref_id": "BIBREF19"
},
{
"start": 494,
"end": 515,
"text": "Ganchev et al., 2010;",
"ref_id": "BIBREF9"
},
{
"start": 516,
"end": 542,
"text": "Mann and Mc-Callum, 2010b;",
"ref_id": null
},
{
"start": 543,
"end": 562,
"text": "Chang et al., 2012;",
"ref_id": "BIBREF2"
},
{
"start": 563,
"end": 581,
"text": "Wang et al., 2012;",
"ref_id": "BIBREF46"
},
{
"start": 582,
"end": 599,
"text": "Zhu et al., 2014;",
"ref_id": "BIBREF50"
},
{
"start": 600,
"end": 622,
"text": "Hope and Shahaf, 2016)",
"ref_id": "BIBREF15"
},
{
"start": 673,
"end": 696,
"text": "(Schapire et al., 2002;",
"ref_id": "BIBREF38"
},
{
"start": 697,
"end": 722,
"text": "Haghighi and Klein, 2006;",
"ref_id": "BIBREF11"
},
{
"start": 723,
"end": 742,
"text": "Druck et al., 2008;",
"ref_id": "BIBREF5"
},
{
"start": 743,
"end": 765,
"text": "Melville et al., 2009;",
"ref_id": "BIBREF26"
},
{
"start": 766,
"end": 794,
"text": "Mohammady and Culotta, 2015)",
"ref_id": "BIBREF27"
},
{
"start": 820,
"end": 845,
"text": "Haghighi and Klein (2006)",
"ref_id": "BIBREF11"
},
{
"start": 1002,
"end": 1024,
"text": "Melville et al. (2009)",
"ref_id": "BIBREF26"
},
{
"start": 1130,
"end": 1152,
"text": "Ganchev and Das (2013)",
"ref_id": "BIBREF8"
},
{
"start": 1155,
"end": 1178,
"text": "Wang and Manning (2014)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and Related Work 2.1 Lightly Supervised Learning",
"sec_num": "2"
},
{
"text": "Expectation Regularization (XR) (Mann and Mc-Callum, 2007 ) is a lightly supervised learning method, in which the model is trained to fit the conditional probabilities of labels given features. In the context of NLP, XR was used by Mohammady and Culotta (2015) to train twitter-user attribute prediction using hundreds of noisy distributional expectations based on census demographics. Here, we suggest using XR to train a target task (aspect-level sentiment) based on the output of a related source-task classifier (sentence-level sentiment).",
"cite_spans": [
{
"start": 32,
"end": 57,
"text": "(Mann and Mc-Callum, 2007",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Expectation Regularization (XR)",
"sec_num": "2.2"
},
{
"text": "Learning Setup The main idea of XR is moving from a fully supervised situation in which each data-point x i has an associated label y i , to a setup in which sets of data points U j are associated with corresponding label proportionsp j over that set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expectation Regularization (XR)",
"sec_num": "2.2"
},
{
"text": "Formally, let X = {x 1 , x 2 , . . . , x n } \u2286 X be a set of data points, Y be a set of |Y| class labels, U = {U 1 , U 2 , . . . , U m } be a set of sets where U j \u2286 X for every j \u2208 {1, 2, . . . , m}, and let p j \u2208 R |Y| be the label distribution of set U j . For example,p j = {.7, .2, .1} would indicate that 70% of data points in U j are expected to have class 0, 20% are expected to have class 1 and 10% are expected to have class 2. Let p \u03b8 (x) be a parameterized function with parameters \u03b8 from X to a vector of conditional probabilities over labels in Y. We write p \u03b8 (y|x) to denote the probability assigned to the yth event (the conditional probability of y given x).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expectation Regularization (XR)",
"sec_num": "2.2"
},
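{
"text": "To make this learning setup concrete, here is a small illustrative sketch (plain Python; the identifiers and numbers are made up for illustration and are not from the paper or its released code) of the form the XR supervision takes: sets of unlabeled data points paired with expected label proportions rather than per-example labels.
# Each XR 'training example' is a set of data points plus the expected
# distribution over the |Y| = 3 classes within that set; no individual
# data point carries a label of its own.
U = [
    ['x1', 'x4', 'x9'],        # U_1: data points (here just identifiers)
    ['x2', 'x3', 'x5', 'x8'],  # U_2
]
p_tilde = [
    [0.7, 0.2, 0.1],  # expected class proportions for U_1
    [0.1, 0.6, 0.3],  # expected class proportions for U_2
]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expectation Regularization (XR)",
"sec_num": "2.2"
},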
{
"text": "A typically objective when training on fully labeled data of (x i , y i ) pairs is to maximize likelihood of labeled data using the cross entropy loss,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expectation Regularization (XR)",
"sec_num": "2.2"
},
{
"text": "L cross (\u03b8) = \u2212 n i log p \u03b8 (y i |x i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expectation Regularization (XR)",
"sec_num": "2.2"
},
{
"text": "Instead, in XR our data comes in the form of pairs (U j ,p j ) of sets and their corresponding expected label proportions, and we aim to optimize \u03b8 to fit the label distributionp j over U j , for all j.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expectation Regularization (XR)",
"sec_num": "2.2"
},
{
"text": "XR Loss As counting the number of predicted class labels over a set U leads to a nondifferentiable objective, Mann and McCallum (2007) suggest to relax it and use instead the model's posterior distributionp j over the set:",
"cite_spans": [
{
"start": 110,
"end": 134,
"text": "Mann and McCallum (2007)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Expectation Regularization (XR)",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "q j (y) = x\u2208U j p \u03b8 (y|x) (1) p j (y) =q j (y) y q j (y )",
"eq_num": "(2)"
}
],
"section": "Expectation Regularization (XR)",
"sec_num": "2.2"
},
{
"text": "where q(y) indicates the yth entry in q. Then, we would like to set \u03b8 such thatp j andp j are close. Mann and McCallum (2007) suggest to use KLdivergence for this. KL-divergence is composed of two parts:",
"cite_spans": [
{
"start": 101,
"end": 125,
"text": "Mann and McCallum (2007)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Expectation Regularization (XR)",
"sec_num": "2.2"
},
{
"text": "D KL (p j ||p j ) = \u2212p j \u2022 logp j +p j \u2022 logp j = H(p j ,p j ) \u2212 H(p j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expectation Regularization (XR)",
"sec_num": "2.2"
},
{
"text": "Since H(p j ) is constant, we only need to minimize H(p j ,p j ), therefore the loss function becomes: 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expectation Regularization (XR)",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L XR (\u03b8) = \u2212 m j=1p j \u2022 logp j",
"eq_num": "(3)"
}
],
"section": "Expectation Regularization (XR)",
"sec_num": "2.2"
},
{
"text": "Notice that computingq j requires summation over p \u03b8 (x) for the entire set U j , which can be prohibitive. We present batched approximation (Section 3.2) to overcome this.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expectation Regularization (XR)",
"sec_num": "2.2"
},
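{
"text": "As a rough sketch of equations (1-3), assuming a PyTorch-style model that outputs unnormalized class scores (this is an illustrative reimplementation, not the paper's released code), the relaxed XR loss over a full set U_j could be computed as:
import torch

def xr_loss(model, inputs_Uj, p_tilde_j):
    # inputs_Uj: a batch holding every example in the set U_j.
    # p_tilde_j: expected label proportions for U_j, shape (num_classes,).
    probs = torch.softmax(model(inputs_Uj), dim=-1)  # p_theta(y|x) for each x, shape (n, C)
    q_hat = probs.sum(dim=0)                         # eq. (1): sum posteriors over the set
    p_hat = q_hat / q_hat.sum()                      # eq. (2): normalize to a distribution
    return -(p_tilde_j * torch.log(p_hat)).sum()     # eq. (3): cross-entropy against p_tilde_j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expectation Regularization (XR)",
"sec_num": "2.2"
},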
{
"text": "Temperature Parameter Mann and McCallum (2007) find that XR might find a degenerate solution. For example, in a three class classification task, wherep j = {.5, .35, .15}, it might find a solution such thatp \u03b8 (y) = {.5, .35, .15} for every instance, as a result, every instance will be classified the same. To avoid this, Mann and McCallum (2007) suggest to penalize flat distributions by using a temperature coefficient T likewise:",
"cite_spans": [
{
"start": 22,
"end": 46,
"text": "Mann and McCallum (2007)",
"ref_id": "BIBREF23"
},
{
"start": 323,
"end": 347,
"text": "Mann and McCallum (2007)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Expectation Regularization (XR)",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p \u03b8 (y|x) = e zW +b k e (zW +b) k 1 T",
"eq_num": "(4)"
}
],
"section": "Expectation Regularization (XR)",
"sec_num": "2.2"
},
{
"text": "Where z is a feature vector and W and b are the linear classifier parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expectation Regularization (XR)",
"sec_num": "2.2"
},
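{
"text": "A small illustrative sketch of the temperature adjustment of equation (4) (PyTorch-style; the explicit renormalization at the end is an implementation choice, since the equation as printed does not spell it out):
import torch

def temperature_posterior(z, W, b, T=1.0):
    # z: feature vector(s); W, b: linear classifier parameters, as in eq. (4).
    scores = z @ W + b
    probs = torch.softmax(scores, dim=-1)
    probs_t = probs ** (1.0 / T)  # raise the softmax output to the power 1/T
    return probs_t / probs_t.sum(dim=-1, keepdim=True)  # renormalize (implementation choice)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expectation Regularization (XR)",
"sec_num": "2.2"
},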
{
"text": "In the aspect-based sentiment classification (ABSC) task, we are given a sentence and an aspect, and need to determine the sentiment that is expressed towards the aspect. For example the sentence \"Excellent food, although the interior could use some help.\" has two aspects:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aspect-based Sentiment Classification",
"sec_num": "2.3"
},
{
"text": "Algorithm 1 Stochastic Batched XR Inputs: A dataset (U 1 , ..., U m ,p 1 , ...,p m ), batch size k, differentiable classifier p \u03b8 (y|x) while not converged do j \u2190 random(1, ..., m) U \u2190 random-choice(U j ,k) q u \u2190 x\u2208U p \u03b8 (x) p u \u2190 normalize(q u ) \u2190 \u2212p j logp u",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aspect-based Sentiment Classification",
"sec_num": "2.3"
},
{
"text": "Compute loss (eq (4)) Compute gradients and update \u03b8 end while return \u03b8 food and interior, a positive sentiment is expressed about the food, but a negative sentiment is expressed about the interior. A sentence \u03b1 = (w 1 , w 2 , . . . , w n ), may contain 0 or more aspects a i , where each aspect corresponds to a sub-sequence of the original sentence, and has an associated sentiment label (NEG, POS, or NEU). Concretely, we follow the task definition in the SemEval-2015 and SemEval-2016 shared tasks (Pontiki et al., 2015 (Pontiki et al., , 2016 , in which the relevant aspects are given and the task focuses on finding the sentiment label of the aspects.",
"cite_spans": [
{
"start": 502,
"end": 523,
"text": "(Pontiki et al., 2015",
"ref_id": "BIBREF34"
},
{
"start": 524,
"end": 547,
"text": "(Pontiki et al., , 2016",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Aspect-based Sentiment Classification",
"sec_num": "2.3"
},
{
"text": "While sentence-level sentiment labels are relatively easy to obtain, aspect-level annotation are much more scarce, as demonstrated in the small datasets of the SemEval shared tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Aspect-based Sentiment Classification",
"sec_num": "2.3"
},
{
"text": "Consider two classification tasks over a shared input space, a source task s from X to Y s and a target task t from X to Y t , which are related through a conditional distribution P (y t = i|y s = j). In other words, a labeling decision for task s induces an expected label distribution over the task t. For a set of datapoints x 1 , ..., x n that share a source label y s , we expect to see a target label distribution of P (y t |y s ) =p y s . Given a large unlabeled dataset",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer-training between related tasks with XR",
"sec_num": "3.1"
},
{
"text": "D u = (x u 1 , ..., x u |D u | ), a small labeled dataset for the tar- get task D t = ((x t 1 , y t 1 ), ..., (x t |D t | , y t |D t | ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer-training between related tasks with XR",
"sec_num": "3.1"
},
{
"text": ", classifier C s : X \u2192 Y s (or sufficient training data to train one) for the source task, 2 we wish to use C s and D u to train a good classifier C t : X \u2192 Y t for the target task. This can be achieved using the following procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer-training between related tasks with XR",
"sec_num": "3.1"
},
{
"text": "\u2022 Apply C s to D t , resulting in a noisy sourceside labels\u1ef9 s i = C s (x t i ) for the target task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer-training between related tasks with XR",
"sec_num": "3.1"
},
{
"text": "\u2022 Estimate the conditional probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer-training between related tasks with XR",
"sec_num": "3.1"
},
{
"text": "P (y t |\u1ef9 s ) table using MLE estimates over D t p j (y t = i|\u1ef9 s = j) = #(y t = i,\u1ef9 s = j) #(\u1ef9 s = j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer-training between related tasks with XR",
"sec_num": "3.1"
},
{
"text": "where # is a counting function over D t . 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer-training between related tasks with XR",
"sec_num": "3.1"
},
{
"text": "\u2022 Apply C s to the unlabeled data D u resulting in labels C s (x u i ). Split D u into |Y s | sets U j according to the labeling induced by C s :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer-training between related tasks with XR",
"sec_num": "3.1"
},
{
"text": "U j = {x u i | x u i \u2208 D u \u2227 C s (x u i ) = j}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer-training between related tasks with XR",
"sec_num": "3.1"
},
{
"text": "\u2022 Use Algorithm 1 to train a classifier for the target task using input pairs (U j ,p j ) and the XR loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer-training between related tasks with XR",
"sec_num": "3.1"
},
{
"text": "In words, by using XR training, we use the expected label proportions over the target task given predicted labels of the source task, to train a targetclass classifier. Mann and McCallum (2007) and following work take the base classifier p \u03b8 (y|x) to be a logistic regression classifier, for which they manually derive gradients for the XR loss and train with LBFGs (Byrd et al., 1995) . However, nothing precludes us from using an arbitrary neural network instead, as long as it culminates in a softmax layer.",
"cite_spans": [
{
"start": 169,
"end": 193,
"text": "Mann and McCallum (2007)",
"ref_id": "BIBREF23"
},
{
"start": 366,
"end": 385,
"text": "(Byrd et al., 1995)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer-training between related tasks with XR",
"sec_num": "3.1"
},
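{
"text": "The second step above, estimating the conditional proportion table by MLE counting over D^t labeled with the noisy source labels, can be sketched as follows (plain Python; names are illustrative, not the paper's code):
from collections import Counter, defaultdict

def estimate_proportions(noisy_source_labels, gold_target_labels, target_classes):
    # Count co-occurrences of (noisy source label, gold target label) over D^t.
    counts = defaultdict(Counter)
    for y_s, y_t in zip(noisy_source_labels, gold_target_labels):
        counts[y_s][y_t] += 1
    # MLE estimate: p_tilde[j][i] = #(y^t = i, noisy y^s = j) / #(noisy y^s = j)
    p_tilde = {}
    for y_s, c in counts.items():
        total = sum(c.values())
        p_tilde[y_s] = [c[y_t] / total for y_t in target_classes]
    return p_tilde",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer-training between related tasks with XR",
"sec_num": "3.1"
},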
{
"text": "One complicating factor is that the computation ofq j in equation 1requires a summation over p \u03b8 (x) for the entire set U j , which in our setup may contain hundreds of thousands of examples, making gradient computation and optimization impractical. We instead proposed a stochastic batched approximation in which, instead of requiring that the full constraint set U j will match the expected label posterior distribution, we require that sufficiently large random subsets of it will match work, etc. In this work, we use a neural classification model. 3 In theory, we could estimate-or even \"guess\"-these |Y s | \u00d7 |Y t | values without using D t at all. In practice, and in particular because we care about the target label proportions given noisy source labels\u1ef9 s assigned by C s , we use MLE estimates over the tagged D t . the distribution. At each training step we compute the loss and update the gradient with respect to a different random subset. Specifically, in each training step we sample a random pair (U j ,p j ), sample a random subset U of U j of size k, and compute the local XR loss of set U :",
"cite_spans": [
{
"start": 553,
"end": 554,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic Batched Training for Deep XR",
"sec_num": "3.2"
},
{
"text": "L XR (\u03b8; j, U ) = \u2212p j \u2022 logp u (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic Batched Training for Deep XR",
"sec_num": "3.2"
},
{
"text": "wherep u is computed by summing over the elements of U rather than of U j in equations (1-2). The stochastic batched XR training algorithm is given in Algorithm 1. For large enough k, the expected label distribution of the subset is the same as that of the complete set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic Batched Training for Deep XR",
"sec_num": "3.2"
},
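{
"text": "Putting the pieces together, Algorithm 1 with the local loss of equation (5) can be sketched as a training loop roughly as follows (PyTorch-style; model, sets and proportions are illustrative placeholders, and the paper's actual implementation uses DyNet):
import random
import torch

def stochastic_batched_xr(model, optimizer, sets, proportions, k=450, steps=10000):
    # sets[j]: list of pre-encoded input tensors belonging to U_j.
    # proportions[j]: expected label distribution for U_j, a tensor of shape (num_classes,).
    for _ in range(steps):
        j = random.randrange(len(sets))                       # sample a pair (U_j, p_tilde_j)
        batch = random.sample(sets[j], min(k, len(sets[j])))  # random subset U of size k
        probs = torch.softmax(model(torch.stack(batch)), dim=-1)
        p_hat = probs.sum(dim=0)
        p_hat = p_hat / p_hat.sum()                           # batch-level posterior, eqs. (1-2)
        loss = -(proportions[j] * torch.log(p_hat)).sum()     # local XR loss, eq. (5)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic Batched Training for Deep XR",
"sec_num": "3.2"
},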
{
"text": "We demonstrate the procedure given above by training Aspect-based Sentiment Classifier (ABSC) using sentence-level 4 sentiment signals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application to Aspect-based Sentiment",
"sec_num": "4"
},
{
"text": "We observe that while the sentence-level sentiment does not determine the sentiment of individual aspects (a positive sentence may contain negative remarks about some aspects), it is very predictive of the proportion of sentiment labels of the fragments within a sentence. Positively labeled sentences are likely to have more positive aspects and fewer negative ones, and vice-versa for negatively-labeled sentences. While these proportions may vary on the individual sentence level, we expect them to be stable when aggregating fragments from several sentences: when considering a large enough sample of fragments that all come from positively labeled sentences, we expect the different samples to have roughly similar label proportions to each other. This situation is idealy suited for performing XR training, as described in section 3.1. The application to ABSC is almost straightforward, but is complicated a bit by the decomposition of sentences into fragments: each sentence level decision now corresponds to multiple fragment-level decisions. Thus, we apply the sentence-level (task A) classifier C s on the aspectlevel corpus D t by applying it on the sentence level and then associating the predicted sentence labels with each of the fragments, resulting in Figure 1 : Illustration of the algorithm. C s is applied to D u resulting in\u1ef9 for each sentence, U j is built according with the fragments of the same labelled sentences, the probabilities for each fragment in U j are summed and normalized, the XR loss in equation 4is calculated and the network is updated. Figure 2 : Illustration of the decomposition procedure, when given a 1 =\"duck confit\" and a 2 = \"foie gras terrine with figs\" as the pivot phrases. fragment-level labeling. Similarly, when we apply C s to the unlabeled data D u we again do it at the sentence level, but the sets U j are composed of fragments, not sentences:",
"cite_spans": [],
"ref_spans": [
{
"start": 1268,
"end": 1276,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1576,
"end": 1584,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Relating the classification tasks",
"sec_num": "4.1"
},
{
"text": "U j = {f \u03b1 i | \u03b1 \u2208 D u \u2227f \u03b1 i \u2208 frags(\u03b1)\u2227C s (\u03b1) = j}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relating the classification tasks",
"sec_num": "4.1"
},
{
"text": "We then apply algorithm 1 as is: at each step of training we sample a source label j \u2208 {POS,NEG,NEU}, sample k fragments from U j , and use the XR loss to fit the expected fragmentlabel proportions over these k fragments top j . Figure 1 illustrates the procedure.",
"cite_spans": [],
"ref_spans": [
{
"start": 229,
"end": 237,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Relating the classification tasks",
"sec_num": "4.1"
},
{
"text": "We model the ABSC problem by associating each (sentence,aspect) pair with a sentence-fragment, and constructing a neural classifier from fragments to sentiment labels. We heuristically decompose a sentence into fragments. We use the same BiL-STM based neural architecture for both sentence classification and fragment classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Architecture",
"sec_num": "4.2"
},
{
"text": "We now describe the procedure we use to associate a sentence fragment with each (sentence,aspect) pairs. The shared tasks data associates each aspect with a pivotphrase a, where pivot phrase (w 1 , w 2 , ...w l ) is defined as a pre-determined sequence of words that is contained within the sentence. For a sentence \u03b1, a set of pivot phrases A = (a 1 , ..., a m ) and a specific pivot phrase a i , we consult the constituency parse tree of \u03b1 and look for tree nodes that satisfy the following conditions: 5 1. The node governs the desired pivot phrase a i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fragment-decomposition",
"sec_num": null
},
{
"text": "VBN, VBG, VBP, VBZ) or an adjective (JJ, JJR, JJS), which is different than any a j \u2208 A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The node governs either a verb (VB, VBD,",
"sec_num": "2."
},
{
"text": "3. The node governs a minimal number of pivot phrases from (a 1 , ..., a m ), ideally only a i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The node governs either a verb (VB, VBD,",
"sec_num": "2."
},
{
"text": "We then select the highest node in the tree that satisfies all conditions. The span governed by this node is taken as the fragment associated with as-pect a i . 6 The decomposition procedure is demonstrated in Figure 2 .",
"cite_spans": [
{
"start": 161,
"end": 162,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 210,
"end": 218,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "The node governs either a verb (VB, VBD,",
"sec_num": "2."
},
{
"text": "When aspect-level information is given, we take the pivot-phrases to be the requested aspects. When aspect-level information is not available, we take each noun in the sentence to be a pivotphrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The node governs either a verb (VB, VBD,",
"sec_num": "2."
},
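{
"text": "A rough sketch of this fragment-selection heuristic, using an nltk constituency Tree (illustrative only; it simplifies the three conditions above and approximates 'highest node' by largest governed span):
from nltk.tree import Tree

VERB_ADJ = {'VB', 'VBD', 'VBN', 'VBG', 'VBP', 'VBZ', 'JJ', 'JJR', 'JJS'}

def contains(leaves, phrase):
    # True if the token sequence `phrase` occurs inside `leaves`.
    return any(leaves[i:i + len(phrase)] == phrase for i in range(len(leaves) - len(phrase) + 1))

def select_fragment(tree, pivot, other_pivots):
    best, best_key = None, None
    for node in tree.subtrees():
        leaves = node.leaves()
        if not contains(leaves, pivot):      # condition 1: governs the pivot phrase
            continue
        pivot_words = set(pivot) | {w for p in other_pivots for w in p}
        verbs_adjs = {w for w, t in node.pos() if t in VERB_ADJ}
        if not (verbs_adjs - pivot_words):   # condition 2: governs a verb/adjective outside the pivots
            continue
        n_other = sum(contains(leaves, p) for p in other_pivots)
        key = (n_other, -len(leaves))        # condition 3: few other pivots; prefer the larger (higher) span
        if best_key is None or key < best_key:
            best, best_key = node, key
    # Fall back to the whole sentence when no node satisfies the conditions.
    return best.leaves() if best is not None else tree.leaves()

# Tiny usage example with a hypothetical parse:
# tree = Tree.fromstring('(S (NP (NN food)) (VP (VBZ is) (ADJP (JJ excellent))))')
# select_fragment(tree, ['food'], [])  ->  ['food', 'is', 'excellent']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fragment-decomposition",
"sec_num": null
},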
{
"text": "Neural Classifier Our classification model is a simple 1-layer BiLSTM encoder (a concatenation of the last states of a forward and a backward running LSTMs) followed by a linear-predictor. The encoder is fed either a complete sentence or a sentence fragment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The node governs either a verb (VB, VBD,",
"sec_num": "2."
},
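{
"text": "A minimal sketch of such an encoder-classifier in PyTorch (the paper's implementation uses DyNet; this only mirrors the described architecture, not the actual code):
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=300, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_classes)  # linear predictor

    def forward(self, token_ids):
        # token_ids: (batch, seq_len); the same encoder is fed full sentences or fragments.
        emb = self.embed(token_ids)
        _, (h_n, _) = self.lstm(emb)
        # Concatenate the last states of the forward and backward LSTMs.
        feats = torch.cat([h_n[0], h_n[1]], dim=-1)
        return self.out(feats)  # unnormalized scores; a softmax is applied in the loss",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Architecture",
"sec_num": "4.2"
},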
{
"text": "Data Our target task is aspect-based fragmentclassification, with small labeled datasets from the SemEval 2015 and 2016 shared tasks, each dataset containing aspect-level predictions for about 2000 sentences in the restaurants reviews domain. Our source classifier is based on training on up to 10,000 sentences from the same domain and 2000 sentences for validation, labeled for only for sentence-level sentiment. We additionally have an unlabeled dataset of up to 670,000 sentences from the same domain 7 . We tokenized all datasets using the Tweet Tokenizer from NLTK package 8 and parsed the tokenized sentences with AllenNLP parser. 9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Training Details Both the sentence level classification models and the models trained with XR have a hidden state vector dimension of size 300, they use dropout (Hinton et al., 2012) on the sentence representation or fragment representation vector (rate=0.5) and optimized using Adam (Kingma and Ba, 2014). The sentence classification is trained with a batch size of 30 and XR models are trained with batch sizes k that each contain 450 fragments 10 . We used a temperature param- 6 On the rare occasions where we cannot find such a node, we take the root node of the tree (the entire sentence) as the fragment for the given aspect.",
"cite_spans": [
{
"start": 161,
"end": 182,
"text": "(Hinton et al., 2012)",
"ref_id": "BIBREF14"
},
{
"start": 481,
"end": 482,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "7 All of the sentence-level sentiment data is obtained from the Yelp dataset challenge: https://www.yelp.com/ dataset/challenge 8 https://www.nltk.org/ 9 https://allennlp.org/ 10 We also increased the batch sizes of the baselines to match those of the XR setups. This decreased the performance of the baselines, which is consistent with the folk knowledge in the community according to which smaller batch sizes are more effective overall. eter of 1 11 . We use pre-trained 300-dimensional GloVe embeddings 12 (Pennington et al., 2014) , and fine-tune them during training. The XR training was validated with a validation set of 20% of SemEval-2015 training set, the sentence level BiL-STM classifiers were validated with a validation of 2000 sentences. 13 When fine-tuning to the aspect based task we used 20% of train in each dataset as validation and evaluated on this set. On each training method the models were evaluated on the validation set, after each epoch and the best model was chosen. The data is highly imbalanced, with only very few sentences receiving a NEU label. We do not deal with this imbalance directly and train both the sentence level and the XR aspect-based training on the imbalanced data. However, when training C s , we trained five models and chose the best model that predicts correctly at least 20% of the neutral sentences. The models are implemented using DyNet 14 (Neubig et al., 2017) .",
"cite_spans": [
{
"start": 510,
"end": 535,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF32"
},
{
"start": 754,
"end": 756,
"text": "13",
"ref_id": null
},
{
"start": 1398,
"end": 1419,
"text": "(Neubig et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Baseline models In recent years many neural network architectures with increasing sophistication were applied to the ABSC task (Nguyen and Shirai, 2015; Vo and Zhang, 2015; Tang et al., 2016a,b; Wang et al., 2016; Zhang et al., 2016; Ruder et al., 2016; Ma et al., 2017; Liu and Zhang, 2017; Chen et al., 2017; Wang et al., 2018b,a; Fan et al., 2018a,b; Li et al., 2018; Ouyang and Su, 2018) . We compare to a series of state-of-theart ABSC neural classifiers that participated in the shared tasks. TDLSTM-ATT (Tang et al., 2016a) encodes the information around an aspect using forward and backward LSTMs, followed by an attention mechanism. ATAE-LSTM (Wang et al., 2016) is an attention based LSTM variant. MM (Tang et al., 2016b ) is a deep memory network with multiple-hops of attention layers. RAM (Chen et al., 2017) uses multiple attention mechanisms combined with a recurrent neural networks and a weighted memory mechanism. LSTM+SynATT+TarRep (He et al., 2018a) is an attention based LSTM which incorporates syn- Table 1 : Average accuracies and Macro-F1 scores over five runs with random initialization along with their standard deviations. Bold: best results or within std of them. * indicates that the method's result is significantly better than all baseline methods, \u2020 indicates that the method's result is significantly better than all baselines methods that use the aspect-based data only, with p < 0.05 according to a one-tailed unpaired t-test. The data annotations S, N and A indicate training with Sentence-level, Noisy sentence-level and Aspect-level data respectively. Numbers for TDLSTM+Att,ATAE-LSTM,MM,RAM and LSTM+SynATT+TarRep are from (He et al., 2018a) . Numbers for Semisupervised are from (He et al., 2018b) .",
"cite_spans": [
{
"start": 127,
"end": 152,
"text": "(Nguyen and Shirai, 2015;",
"ref_id": null
},
{
"start": 153,
"end": 172,
"text": "Vo and Zhang, 2015;",
"ref_id": "BIBREF41"
},
{
"start": 173,
"end": 194,
"text": "Tang et al., 2016a,b;",
"ref_id": null
},
{
"start": 195,
"end": 213,
"text": "Wang et al., 2016;",
"ref_id": "BIBREF45"
},
{
"start": 214,
"end": 233,
"text": "Zhang et al., 2016;",
"ref_id": "BIBREF49"
},
{
"start": 234,
"end": 253,
"text": "Ruder et al., 2016;",
"ref_id": "BIBREF37"
},
{
"start": 254,
"end": 270,
"text": "Ma et al., 2017;",
"ref_id": "BIBREF22"
},
{
"start": 271,
"end": 291,
"text": "Liu and Zhang, 2017;",
"ref_id": "BIBREF21"
},
{
"start": 292,
"end": 310,
"text": "Chen et al., 2017;",
"ref_id": "BIBREF3"
},
{
"start": 311,
"end": 332,
"text": "Wang et al., 2018b,a;",
"ref_id": null
},
{
"start": 333,
"end": 353,
"text": "Fan et al., 2018a,b;",
"ref_id": null
},
{
"start": 354,
"end": 370,
"text": "Li et al., 2018;",
"ref_id": "BIBREF18"
},
{
"start": 371,
"end": 391,
"text": "Ouyang and Su, 2018)",
"ref_id": "BIBREF31"
},
{
"start": 510,
"end": 530,
"text": "(Tang et al., 2016a)",
"ref_id": "BIBREF39"
},
{
"start": 652,
"end": 671,
"text": "(Wang et al., 2016)",
"ref_id": "BIBREF45"
},
{
"start": 711,
"end": 730,
"text": "(Tang et al., 2016b",
"ref_id": "BIBREF40"
},
{
"start": 802,
"end": 821,
"text": "(Chen et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 951,
"end": 969,
"text": "(He et al., 2018a)",
"ref_id": "BIBREF12"
},
{
"start": 1662,
"end": 1680,
"text": "(He et al., 2018a)",
"ref_id": "BIBREF12"
},
{
"start": 1719,
"end": 1737,
"text": "(He et al., 2018b)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 1021,
"end": 1028,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "tactic information into the attention mechanism and uses an auto-encoder structure to produce an aspect representations. All of these models are trained only on the small, fully-supervised ABSC datasets. \"Semisupervised\" is the semi-supervised setup of (He et al., 2018b) , it train an attentionbased LSTM model on 30,000 documents additional to an aspect-based train set, 10,000 documents to each class. We consider additional two simple but strong semi-supervised baselines. Sentence-BiLSTM is our BiLSTM model trained on the 10 4 sentence-level annotations, and applied as-is to the individual fragments. Sentence-BiLSTM+Finetuning is the same model, but finetuned on the aspect-based data as explained above. Finetuning is performed using our own implementation of the attention-based model of He et al. (2018b) . 15 Both these models are on par with the fully-supervised ABSC models.",
"cite_spans": [
{
"start": 253,
"end": 271,
"text": "(He et al., 2018b)",
"ref_id": "BIBREF13"
},
{
"start": 798,
"end": 815,
"text": "He et al. (2018b)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Empirical Proportions The proportion constraint setsp j based on the SemEval-2015 aspect-based train data are: p POS = {POS : 0.93, NEG : 0.06, NEU : 0.01} p NEG = {POS : 0.27, NEG : 0.7, NEU : 0.03} p NEU = {POS : 0.45, NEG : 0.41, NEU : 0.14} Table 1 compares these baselines to three XR conditions. 16 The first condition, BiLSTM-XR-Dev, performs XR training on the automatically-labeled sentence-level dataset. The only access it has to aspect-level annotation is for estimating the proportions of labels for each sentence-level label, which is done based on the validation set of SemEval-2015 (i.e., 20% of the train set). The XR setting is very effective: without using any in-task data, this model already surpasses all other models, both supervised and semi-supervised, except for the (He et al., 2018b ,a) models which achieve higher F1 scores. We note that in contrast to XR, the competing models have complete access to the supervised aspect-based labels. The second condition, BiLSTM-XR, is similar but now the model is allowed to estimate the conditional label proportions based on the entire aspect-based training set (the classifier still does not have direct access to the labels beyond the aggregate proportion information). This improves results further, showing the importance of accurately estimating the proportions. Finally, in BiLSTM-XR+Finetuning, we follow the XR training with fully supervised fine-tuning on the small labeled dataset, using the attention-based model of He et al. (2018b) . This achieves the best results, and surpasses also the semi-supervised He et al. (2018b) baseline on accuracy, and matching it on F1. 17 We report significance tests for the robustness of the method under random parameter initialization. Our reported numbers are averaged over five random initialization. Since the datasets are unbalanced w.r.t the label distribution, we report both accuracy and macro-F1. The XR training is also more stable than the other semi-supervised baselines, achieving substantially lower standard deviations across different runs.",
"cite_spans": [
{
"start": 302,
"end": 304,
"text": "16",
"ref_id": null
},
{
"start": 793,
"end": 810,
"text": "(He et al., 2018b",
"ref_id": "BIBREF13"
},
{
"start": 1497,
"end": 1514,
"text": "He et al. (2018b)",
"ref_id": "BIBREF13"
},
{
"start": 1651,
"end": 1653,
"text": "17",
"ref_id": null
}
],
"ref_spans": [
{
"start": 245,
"end": 252,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "In each experiment in this section we estimate the proportions using the SemEval-2015 train set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Further experiments",
"sec_num": "5.2"
},
{
"text": "Effect of unlabeled data size How does the XR training scale with the amount of unlabeled data? Figure 3a shows the macro-F1 scores on the entire SemEval-2016 dataset, with different unlabeled corpus sizes (measured in number of sentences). An unannotated corpus of 5\u00d710 4 sentences is sufficient to surpass the results of the 10 4 sentencelevel trained classifier, and more unannotated data further improves the results.",
"cite_spans": [],
"ref_spans": [
{
"start": 96,
"end": 105,
"text": "Figure 3a",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Further experiments",
"sec_num": "5.2"
},
{
"text": "Effect of Base-classifier Quality Our method requires a sentence level classifier C s to label both the target-task corpus and the unlabeled corpus. How does the quality of this classifier affect the overall XR training? We vary the amount of supervision used to train C s from 0 sentences (assigning the same label to all sentences), to 100, 1000, 5000 and 10000 sentences. We again measure macro-F1 on the entire SemEval 2016 corpus. The results in Figure 3b show that when using the prior distributions of aspects (0), the model struggles to learn from this signal, it learns mostly to predict the majority class, and hence reaches very low F1 scores of 35.28. The more data given to the sentence level classifier, the better the potential results will be when training with our method using the classifier labels, with a classifiers trained on 100,1000,5000 and 10000 labeled sentences, we get a F1 scores of 53.81, 58.84, 61.81, 65 .58 respectively. Improvements in the source task classifier's quality clearly contribute to the target task accuracy.",
"cite_spans": [
{
"start": 913,
"end": 936,
"text": "53.81, 58.84, 61.81, 65",
"ref_id": null
}
],
"ref_spans": [
{
"start": 451,
"end": 460,
"text": "Figure 3b",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Further experiments",
"sec_num": "5.2"
},
{
"text": "Effect of k The Stochastic Batched XR algorithm (Algorithm 1) samples a batch of k examples at each step to estimate the posterior label distribution used in the loss computation. How does the size of k affect the results? We use k = 450 fragments in our main experiments, but smaller values of k reduce GPU memory load and may train better in practice. We tested our method with varying values of k on a sample of 5 \u00d7 10 4 , using batches that are composed of fragments of 5, 25, 100, 450, 1000 and 4500 sentences. The results are shown in Figure 3c . Setting k = 5 result in low scores. Setting k = 25 yields better F1 score but with high variance across runs. For k = 100 fragments the results begin to stabilize, we also see a slight decrease in F1-scores with larger batch sizes. We attribute this drop despite having better estimation of the gradients to the general trend of larger batch sizes being harder to train with stochastic gradient methods.",
"cite_spans": [],
"ref_spans": [
{
"start": 541,
"end": 550,
"text": "Figure 3c",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Further experiments",
"sec_num": "5.2"
},
{
"text": "The XR training can be performed also over pretrained representations. We experiment with two pre-training methods: (1) pre-training by training the BiLSTM model to predict the noisy sentencelevel predictions. (2) Using the pre-trained BERT representation (Devlin et al., 2018) . For (1), we compare the effect of pre-train on unlabeled corpora of sizes of 5 \u00d7 10 4 , 10 5 and 6.7 \u00d7 10 5 sentences. Results in Figure 3d show that this form of pre-training is effective for smaller unlabeled corpora but evens out for larger ones.",
"cite_spans": [
{
"start": 256,
"end": 277,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 410,
"end": 419,
"text": "Figure 3d",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Pre-training, BERT",
"sec_num": "5.3"
},
{
"text": "BERT For the BERT experiments, we experiment with the BERT-base model 18 with k = 450 sets, 30 epochs for XR training or sentence level fine-tuning 19 and 15 epochs for aspect based finetuning, on each training method we evaluated the model on the dev set after each epoch and the best model was chosen 20 . We compare the following setups: -BERT\u2192Aspect Based Finetuning: pretrained BERT model finetuned to the aspect based task. -BERT\u2192 10 4 : A pretrained BERT model finetuned to the sentence level task on the 10 4 sentences, and tested by predicting fragment-level sentiment. -BERT\u219210 4 \u2192Aspect Based Finetuning: pretrained BERT model finetuned to the sentence level task, and finetuned again to the aspect based one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-training, BERT",
"sec_num": "5.3"
},
{
"text": "-BERT\u2192XR: pretrained BERT model followed by Table 2 : BERT pre-training: average accuracies and Macro-F1 scores from five runs and their stdev. * indicates that the method's result is significantly better than all baseline methods, \u2020 indicates that the method's result is significantly better than all non XR baseline methods, with p < 0.05 according to a one-tailed unpaired t-test. The data annotations S, N and A indicate training with Sentence-level, Noisy sentence-level and Aspect-level data respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 44,
"end": 51,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pre-training, BERT",
"sec_num": "5.3"
},
{
"text": "-BERT\u2192 XR \u2192 Aspect Based Finetuning: pretrained BERT followed by XR training and then fine-tuned to the aspect level task. The results are presented in Table 2 . As before, aspect-based fine-tuning is beneficial for both SemEval-16 and SemEval-15. Training a BiL-STM with XR surpasses pre-trained BERT models and using XR training on top of the pre-trained BERT models substantially increases the results even further.",
"cite_spans": [],
"ref_spans": [
{
"start": 152,
"end": 159,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pre-training, BERT",
"sec_num": "5.3"
},
{
"text": "We presented a transfer learning method based on expectation regularization (XR), and demonstrated its effectiveness for training aspect-based sentiment classifiers using sentence-level supervision. The method achieves state-of-the-art results for the task, and is also effective for improving on top of a strong pre-trained BERT model. The proposed method provides an additional data-efficient tool in the modeling arsenal, which can be applied on its own or together with another training method, in situations where there is a conditional relations between the labels of a source task for which we have supervision, and a target task for which we don't.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "While we demonstrated the approach on the sentiment domain, the required conditional dependence between task labels is present in many situations. Other possible application of the method includes training language identification of tweets given geo-location supervision (knowing the geographical region gives a prior on languages spoken), training predictors for renal failure from textual medical records given classifier for diabetes (there is a strong correlation between the two conditions), training a political affiliation classifier from social media tweets based on age-group classifiers, zip-code information, or social-status classifiers (there are known correlations between all of these to political affiliation), training hate-speech detection based on emotion detection, and so on.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Note also that \u2200j|Uj| = 1 \u21d0\u21d2 LXR(\u03b8) = Lcross(\u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that the classifier does not need to be trainable or differentiable. It can be a human, a rule based system, a nonparametric model, a probabilistic model, a deep learning net-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In practice, our \"sentences\" are in fact short documents, some of which are composed of two or more sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Condition (2) coupled with selecting the highest node pushes towards complete phrases that contain opinions (which are usually expressed with adjectives or verbs), while the other conditions focus the attention on the desired pivot phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Despite(Mann and McCallum, 2007) claim regarding the temperature parameter, we observed lower performance when using it in our setup. However, in other setups this parameter might be found to be beneficial.12 https://nlp.stanford.edu/projects/ glove/13 We also tested the sentence BiLSTM baselines with a SemEval validation set, and received slightly lower results without a significant statistical difference.14 https://github.com/clab/dynet",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We changed the LSTM component to a BiLSTM.16 To be consistent with existing research(He et al., 2018b), aspects with conflicted polarity are removed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We note that their setup uses clean and more balanced annotations, i.e. they use 10,000 samples for each label, which helps predicting the infrequent neutral sentiment. We however, use noisy sentence sentiment labels which are automatically obtained from a trained classifier, which trains on 10,000 samples in their natural imbalanced distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We could not fit k = 450 sets of BERT-large on our GPU.19 When fine-tuning to the sentence level task, we provide the sentence as input. When fine-tuning to the aspect-level task, we provide the sentence, a seperator and then the aspect.20 The other configuration parameters were the default ones in https://github.com/huggingface/ pytorch-pretrained-BERT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The work was supported in part by The Israeli Science Foundation (grant number 1555/15).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A limited memory algorithm for bound constrained optimization",
"authors": [
{
"first": "H",
"middle": [],
"last": "Richard",
"suffix": ""
},
{
"first": "Peihuang",
"middle": [],
"last": "Byrd",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Ciyou",
"middle": [],
"last": "Nocedal",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 1995,
"venue": "SIAM J. Scientific Computing",
"volume": "16",
"issue": "5",
"pages": "1190--1208",
"other_ids": {
"DOI": [
"10.1137/0916069"
]
},
"num": null,
"urls": [],
"raw_text": "Richard H. Byrd, Peihuang Lu, Jorge Nocedal, and Ciyou Zhu. 1995. A limited memory algorithm for bound constrained optimization. SIAM J. Scientific Computing, 16(5):1190-1208.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Guiding semi-supervision with constraint-driven learning",
"authors": [
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "280--287",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ming-Wei Chang, Lev Ratinov, and Dan Roth. 2007. Guiding semi-supervision with constraint-driven learning. In Proceedings of the 45th Annual Meet- ing of the Association of Computational Linguistics, pages 280-287. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Structured learning with constrained conditional models",
"authors": [
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Lev-Arie",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2012,
"venue": "Machine Learning",
"volume": "88",
"issue": "",
"pages": "399--431",
"other_ids": {
"DOI": [
"10.1007/s10994-012-5296-5"
]
},
"num": null,
"urls": [],
"raw_text": "Ming-Wei Chang, Lev-Arie Ratinov, and Dan Roth. 2012. Structured learning with constrained condi- tional models. Machine Learning, 88(3):399-431.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Recurrent attention network on memory for aspect sentiment analysis",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhongqian",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Lidong",
"middle": [],
"last": "Bing",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "452--461",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1047"
]
},
"num": null,
"urls": [],
"raw_text": "Peng Chen, Zhongqian Sun, Lidong Bing, and Wei Yang. 2017. Recurrent attention network on mem- ory for aspect sentiment analysis. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 452-461. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. CoRR, abs/1810.04805.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning from labeled features using generalized expectation criteria",
"authors": [
{
"first": "Gregory",
"middle": [],
"last": "Druck",
"suffix": ""
},
{
"first": "Gideon",
"middle": [
"S"
],
"last": "Mann",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "595--602",
"other_ids": {
"DOI": [
"10.1145/1390334.1390436"
]
},
"num": null,
"urls": [],
"raw_text": "Gregory Druck, Gideon S. Mann, and Andrew McCal- lum. 2008. Learning from labeled features using generalized expectation criteria. In Proceedings of the 31st Annual International ACM SIGIR Confer- ence on Research and Development in Information Retrieval, SIGIR 2008, Singapore, July 20-24, 2008, pages 595-602.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Convolution-based memory network for aspectbased sentiment analysis",
"authors": [
{
"first": "Chuang",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Qinghong",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jiachen",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "Gui",
"suffix": ""
},
{
"first": "Ruifeng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kam-Fai",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2018,
"venue": "The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR 2018",
"volume": "",
"issue": "",
"pages": "1161--1164",
"other_ids": {
"DOI": [
"10.1145/3209978.3210115"
]
},
"num": null,
"urls": [],
"raw_text": "Chuang Fan, Qinghong Gao, Jiachen Du, Lin Gui, Ruifeng Xu, and Kam-Fai Wong. 2018a. Convolution-based memory network for aspect- based sentiment analysis. In The 41st International ACM SIGIR Conference on Research & Develop- ment in Information Retrieval, SIGIR 2018, Ann Ar- bor, MI, USA, July 08-12, 2018, pages 1161-1164.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Multi-grained attention network for aspect-level sentiment classification",
"authors": [
{
"first": "Feifan",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Yansong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3433--3442",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feifan Fan, Yansong Feng, and Dongyan Zhao. 2018b. Multi-grained attention network for aspect-level sentiment classification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 3433-3442. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Crosslingual discriminative learning of sequence models with posterior regularization",
"authors": [
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1996--2006",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kuzman Ganchev and Dipanjan Das. 2013. Cross- lingual discriminative learning of sequence models with posterior regularization. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1996-2006. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Posterior regularization for structured latent variable models",
"authors": [
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Gra\u00e7a",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Gillenwater",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Machine Learning Research",
"volume": "11",
"issue": "",
"pages": "2001--2049",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kuzman Ganchev, Jo\u00e3o Gra\u00e7a, Jennifer Gillenwater, and Ben Taskar. 2010. Posterior regularization for structured latent variable models. Journal of Ma- chine Learning Research, 11:2001-2049.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Expectation maximization and posterior constraints",
"authors": [
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Gra\u00e7a",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
}
],
"year": 2007,
"venue": "Advances in Neural Information Processing Systems 20, Proceedings of the Twenty-First Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "569--576",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jo\u00e3o Gra\u00e7a, Kuzman Ganchev, and Ben Taskar. 2007. Expectation maximization and posterior constraints. In Advances in Neural Information Processing Sys- tems 20, Proceedings of the Twenty-First Annual Conference on Neural Information Processing Sys- tems, Vancouver, British Columbia, Canada, De- cember 3-6, 2007, pages 569-576.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Prototype-driven learning for sequence models",
"authors": [
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2006,
"venue": "Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aria Haghighi and Dan Klein. 2006. Prototype-driven learning for sequence models. In Human Lan- guage Technology Conference of the North Amer- ican Chapter of the Association of Computational Linguistics, Proceedings, June 4-9, 2006, New York, New York, USA.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Effective attention modeling for aspect-level sentiment classification",
"authors": [
{
"first": "Ruidan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Wee Sun Lee",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dahlmeier",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1121--1131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2018a. Effective attention modeling for aspect-level sentiment classification. In Proceed- ings of the 27th International Conference on Com- putational Linguistics, pages 1121-1131. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Exploiting document knowledge for aspect-level sentiment classification",
"authors": [
{
"first": "Ruidan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Wee Sun Lee",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dahlmeier",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "579--585",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2018b. Exploiting document knowledge for aspect-level sentiment classification. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 579-585. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Improving neural networks by preventing co-adaptation of feature detectors",
"authors": [
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhut- dinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Ballpark learning: Estimating labels from rough group comparisons",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Hope",
"suffix": ""
},
{
"first": "Dafna",
"middle": [],
"last": "Shahaf",
"suffix": ""
}
],
"year": 2016,
"venue": "Machine Learning and Knowledge Discovery in Databases -European Conference, ECML PKDD 2016",
"volume": "",
"issue": "",
"pages": "299--314",
"other_ids": {
"DOI": [
"10.1007/978-3-319-46227-1_19"
]
},
"num": null,
"urls": [],
"raw_text": "Tom Hope and Dafna Shahaf. 2016. Ballpark learn- ing: Estimating labels from rough group compar- isons. In Machine Learning and Knowledge Dis- covery in Databases -European Conference, ECML PKDD 2016, Riva del Garda, Italy, September 19- 23, 2016, Proceedings, Part II, pages 299-314.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A framework for incorporating class priors into discriminative classification",
"authors": [
{
"first": "Rong",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2005,
"venue": "Advances in Knowledge Discovery and Data Mining, 9th Pacific-Asia Conference, PAKDD 2005",
"volume": "",
"issue": "",
"pages": "568--577",
"other_ids": {
"DOI": [
"10.1007/11430919_66"
]
},
"num": null,
"urls": [],
"raw_text": "Rong Jin and Yi Liu. 2005. A framework for in- corporating class priors into discriminative classifi- cation. In Advances in Knowledge Discovery and Data Mining, 9th Pacific-Asia Conference, PAKDD 2005, Hanoi, Vietnam, May 18-20, 2005, Proceed- ings, pages 568-577.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Hierarchical attention based position-aware network for aspect-level sentiment analysis",
"authors": [
{
"first": "Lishuang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Anqiao",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "181--189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lishuang Li, Yang Liu, and AnQiao Zhou. 2018. Hi- erarchical attention based position-aware network for aspect-level sentiment analysis. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 181-189. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning from measurements in exponential families",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 26th Annual International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "641--648",
"other_ids": {
"DOI": [
"10.1145/1553374.1553457"
]
},
"num": null,
"urls": [],
"raw_text": "Percy Liang, Michael I. Jordan, and Dan Klein. 2009. Learning from measurements in exponential fam- ilies. In Proceedings of the 26th Annual Inter- national Conference on Machine Learning, ICML 2009, Montreal, Quebec, Canada, June 14-18, 2009, pages 641-648.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Recurrent entity networks with delayed memory update for targeted aspect-based sentiment analysis",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "278--283",
"other_ids": {
"DOI": [
"10.18653/v1/N18-2045"
]
},
"num": null,
"urls": [],
"raw_text": "Fei Liu, Trevor Cohn, and Timothy Baldwin. 2018. Re- current entity networks with delayed memory update for targeted aspect-based sentiment analysis. In Pro- ceedings of the 2018 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 2 (Short Papers), pages 278-283. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Attention modeling for targeted sentiment",
"authors": [
{
"first": "Jiangming",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "572--577",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiangming Liu and Yue Zhang. 2017. Attention mod- eling for targeted sentiment. In Proceedings of the 15th Conference of the European Chapter of the As- sociation for Computational Linguistics: Volume 2, Short Papers, pages 572-577. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Interactive attention networks for aspect-level sentiment classification",
"authors": [
{
"first": "Dehong",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Houfeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence",
"volume": "2017",
"issue": "",
"pages": "4068--4074",
"other_ids": {
"DOI": [
"10.24963/ijcai.2017/568"
]
},
"num": null,
"urls": [],
"raw_text": "Dehong Ma, Sujian Li, Xiaodong Zhang, and Houfeng Wang. 2017. Interactive attention networks for aspect-level sentiment classification. In Proceed- ings of the Twenty-Sixth International Joint Con- ference on Artificial Intelligence, IJCAI 2017, Mel- bourne, Australia, August 19-25, 2017, pages 4068- 4074.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Simple, robust, scalable semi-supervised learning via expectation regularization",
"authors": [
{
"first": "Gideon",
"middle": [
"S"
],
"last": "Mann",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2007,
"venue": "Machine Learning, Proceedings of the Twenty-Fourth International Conference (ICML 2007)",
"volume": "",
"issue": "",
"pages": "593--600",
"other_ids": {
"DOI": [
"10.1145/1273496.1273571"
]
},
"num": null,
"urls": [],
"raw_text": "Gideon S. Mann and Andrew McCallum. 2007. Sim- ple, robust, scalable semi-supervised learning via expectation regularization. In Machine Learn- ing, Proceedings of the Twenty-Fourth International Conference (ICML 2007), Corvallis, Oregon, USA, June 20-24, 2007, pages 593-600.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Generalized expectation criteria for semi-supervised learning with weakly labeled data",
"authors": [
{
"first": "Gideon",
"middle": [
"S"
],
"last": "Mann",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Machine Learning Research",
"volume": "11",
"issue": "",
"pages": "955--984",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gideon S. Mann and Andrew McCallum. 2010a. Gen- eralized expectation criteria for semi-supervised learning with weakly labeled data. Journal of Ma- chine Learning Research, 11:955-984.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Generalized expectation criteria for semi-supervised learning with weakly labeled data",
"authors": [
{
"first": "Gideon",
"middle": [
"S"
],
"last": "Mann",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Machine Learning Research",
"volume": "11",
"issue": "",
"pages": "955--984",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gideon S. Mann and Andrew McCallum. 2010b. Gen- eralized expectation criteria for semi-supervised learning with weakly labeled data. Journal of Ma- chine Learning Research, 11:955-984.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Sentiment analysis of blogs by combining lexical knowledge with text classification",
"authors": [
{
"first": "Prem",
"middle": [],
"last": "Melville",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Gryc",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"D"
],
"last": "Lawrence",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "1275--1284",
"other_ids": {
"DOI": [
"10.1145/1557019.1557156"
]
},
"num": null,
"urls": [],
"raw_text": "Prem Melville, Wojciech Gryc, and Richard D. Lawrence. 2009. Sentiment analysis of blogs by combining lexical knowledge with text classifica- tion. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Paris, France, June 28 -July 1, 2009, pages 1275-1284.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Inferring latent attributes of twitter users with label regularization",
"authors": [
{
"first": "Aron",
"middle": [],
"last": "Ardehaly Ehsan Mohammady",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Culotta",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "185--195",
"other_ids": {
"DOI": [
"10.3115/v1/N15-1019"
]
},
"num": null,
"urls": [],
"raw_text": "Ardehaly Ehsan Mohammady and Aron Culotta. 2015. Inferring latent attributes of twitter users with label regularization. In Proceedings of the 2015 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, pages 185-195. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Supervised learning by training on aggregate outputs",
"authors": [
{
"first": "David",
"middle": [
"R"
],
"last": "Musicant",
"suffix": ""
},
{
"first": "Janara",
"middle": [
"M"
],
"last": "Christensen",
"suffix": ""
},
{
"first": "Jamie",
"middle": [
"F"
],
"last": "Olson",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 7th IEEE International Conference on Data Mining (ICDM 2007)",
"volume": "",
"issue": "",
"pages": "252--261",
"other_ids": {
"DOI": [
"10.1109/ICDM.2007.50"
]
},
"num": null,
"urls": [],
"raw_text": "David R. Musicant, Janara M. Christensen, and Jamie F. Olson. 2007. Supervised learning by train- ing on aggregate outputs. In Proceedings of the 7th IEEE International Conference on Data Mining (ICDM 2007), October 28-31, 2007, Omaha, Ne- braska, USA, pages 252-261.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Swabha Swayamdipta, and Pengcheng Yin. 2017. Dynet: The dynamic neural network toolkit",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Austin",
"middle": [],
"last": "Matthews",
"suffix": ""
},
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Clothiaux",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Cynthia",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Lingpeng",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Adhiguna",
"middle": [],
"last": "Kuncoro",
"suffix": ""
},
{
"first": "Gaurav",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Chaitanya",
"middle": [],
"last": "Malaviya",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Oda",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Richardson",
"suffix": ""
},
{
"first": "Naomi",
"middle": [],
"last": "Saphra",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "Yin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopou- los, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Ku- mar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. Dynet: The dynamic neural network toolkit. CoRR, abs/1701.03980.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Phrasernn: Phrase recursive neural network for aspect-based sentiment analysis",
"authors": [],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2509--2514",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1298"
]
},
"num": null,
"urls": [],
"raw_text": "Thien Hai Nguyen and Kiyoaki Shirai. 2015. Phrasernn: Phrase recursive neural network for aspect-based sentiment analysis. In Proceedings of the 2015 Conference on Empirical Methods in Nat- ural Language Processing, pages 2509-2514. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Dependency parsing and attention network for aspect-level sentiment classification",
"authors": [
{
"first": "Zhifan",
"middle": [],
"last": "Ouyang",
"suffix": ""
},
{
"first": "Jindian",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2018,
"venue": "Natural Language Processing and Chinese Computing -7th CCF International Conference",
"volume": "",
"issue": "",
"pages": "391--403",
"other_ids": {
"DOI": [
"10.1007/978-3-319-99495-6_33"
]
},
"num": null,
"urls": [],
"raw_text": "Zhifan Ouyang and Jindian Su. 2018. Dependency parsing and attention network for aspect-level sen- timent classification. In Natural Language Process- ing and Chinese Computing -7th CCF International Conference, NLPCC 2018, Hohhot, China, August 26-30, 2018, Proceedings, Part I, pages 391-403.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Semeval-2016 task 5: Aspect based sentiment analysis",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Pontiki",
"suffix": ""
},
{
"first": "Dimitris",
"middle": [],
"last": "Galanis",
"suffix": ""
},
{
"first": "Haris",
"middle": [],
"last": "Papageorgiou",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
},
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "AL-Smadi",
"suffix": ""
},
{
"first": "Mahmoud",
"middle": [],
"last": "Al-Ayyoub",
"suffix": ""
},
{
"first": "Yanyan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Orphee",
"middle": [],
"last": "De Clercq",
"suffix": ""
},
{
"first": "Veronique",
"middle": [],
"last": "Hoste",
"suffix": ""
},
{
"first": "Marianna",
"middle": [],
"last": "Apidianaki",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Tannier",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Loukachevitch",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Kotelnikov",
"suffix": ""
},
{
"first": "N\u00faria",
"middle": [],
"last": "Bel",
"suffix": ""
},
{
"first": "Salud Mar\u00eda",
"middle": [],
"last": "Jim\u00e9nez-Zafra",
"suffix": ""
},
{
"first": "G\u00fcl\u015fen",
"middle": [],
"last": "Eryigit",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "19--30",
"other_ids": {
"DOI": [
"10.18653/v1/S16-1002"
]
},
"num": null,
"urls": [],
"raw_text": "Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Moham- mad AL-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphee De Clercq, Veronique Hoste, Marianna Apidianaki, Xavier Tannier, Na- talia Loukachevitch, Evgeniy Kotelnikov, N\u00faria Bel, Salud Mar\u00eda Jim\u00e9nez-Zafra, and G\u00fcl\u015fen Eryigit. 2016. Semeval-2016 task 5: Aspect based senti- ment analysis. In Proceedings of the 10th Interna- tional Workshop on Semantic Evaluation (SemEval- 2016), pages 19-30. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Semeval-2015 task 12: Aspect based sentiment analysis",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Pontiki",
"suffix": ""
},
{
"first": "Dimitris",
"middle": [],
"last": "Galanis",
"suffix": ""
},
{
"first": "Haris",
"middle": [],
"last": "Papageorgiou",
"suffix": ""
},
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "486--495",
"other_ids": {
"DOI": [
"10.18653/v1/S15-2082"
]
},
"num": null,
"urls": [],
"raw_text": "Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. Semeval-2015 task 12: Aspect based sentiment anal- ysis. In Proceedings of the 9th International Work- shop on Semantic Evaluation (SemEval 2015), pages 486-495. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Estimating labels from label proportions",
"authors": [
{
"first": "Novi",
"middle": [],
"last": "Quadrianto",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"J"
],
"last": "Smola",
"suffix": ""
},
{
"first": "Tib\u00e9rio",
"middle": [
"S"
],
"last": "Caetano",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of Machine Learning Research",
"volume": "10",
"issue": "",
"pages": "2349--2374",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Novi Quadrianto, Alexander J. Smola, Tib\u00e9rio S. Cae- tano, and Quoc V. Le. 2009a. Estimating labels from label proportions. Journal of Machine Learning Re- search, 10:2349-2374.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Estimating labels from label proportions",
"authors": [
{
"first": "Novi",
"middle": [],
"last": "Quadrianto",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"J"
],
"last": "Smola",
"suffix": ""
},
{
"first": "Tib\u00e9rio",
"middle": [
"S"
],
"last": "Caetano",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of Machine Learning Research",
"volume": "10",
"issue": "",
"pages": "2349--2374",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Novi Quadrianto, Alexander J. Smola, Tib\u00e9rio S. Cae- tano, and Quoc V. Le. 2009b. Estimating labels from label proportions. Journal of Machine Learning Re- search, 10:2349-2374.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A hierarchical model of reviews for aspectbased sentiment analysis",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Parsa",
"middle": [],
"last": "Ghaffari",
"suffix": ""
},
{
"first": "John",
"middle": [
"G"
],
"last": "Breslin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "999--1005",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1103"
]
},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder, Parsa Ghaffari, and John G. Breslin. 2016. A hierarchical model of reviews for aspect- based sentiment analysis. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 999-1005. Association for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Incorporating prior knowledge into boosting",
"authors": [
{
"first": "Robert",
"middle": [
"E"
],
"last": "Schapire",
"suffix": ""
},
{
"first": "Marie",
"middle": [],
"last": "Rochery",
"suffix": ""
},
{
"first": "Mazin",
"middle": [
"G"
],
"last": "Rahim",
"suffix": ""
},
{
"first": "Narendra",
"middle": [
"K"
],
"last": "Gupta",
"suffix": ""
}
],
"year": 2002,
"venue": "Machine Learning, Proceedings of the Nineteenth International Conference",
"volume": "",
"issue": "",
"pages": "538--545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert E. Schapire, Marie Rochery, Mazin G. Rahim, and Narendra K. Gupta. 2002. Incorporating prior knowledge into boosting. In Machine Learning, Proceedings of the Nineteenth International Confer- ence (ICML 2002), University of New South Wales, Sydney, Australia, July 8-12, 2002, pages 538-545.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Effective lstms for target-dependent sentiment classification",
"authors": [
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Xiaocheng",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "3298--3307",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duyu Tang, Bing Qin, Xiaocheng Feng, and Ting Liu. 2016a. Effective lstms for target-dependent sen- timent classification. In Proceedings of COLING 2016, the 26th International Conference on Compu- tational Linguistics: Technical Papers, pages 3298- 3307. The COLING 2016 Organizing Committee.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Aspect level sentiment classification with deep memory network",
"authors": [
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "214--224",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1021"
]
},
"num": null,
"urls": [],
"raw_text": "Duyu Tang, Bing Qin, and Ting Liu. 2016b. Aspect level sentiment classification with deep memory net- work. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Process- ing, pages 214-224. Association for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Target-dependent twitter sentiment classification with rich automatic features",
"authors": [
{
"first": "Duy-Tin",
"middle": [],
"last": "Vo",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015",
"volume": "",
"issue": "",
"pages": "1347--1353",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duy-Tin Vo and Yue Zhang. 2015. Target-dependent twitter sentiment classification with rich automatic features. In Proceedings of the Twenty-Fourth Inter- national Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, pages 1347-1353.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Aspect sentiment classification with both word-level and clause-level attention networks",
"authors": [
{
"first": "Jingjing",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shoushan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yangyang",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Luo",
"middle": [],
"last": "Si",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018",
"volume": "",
"issue": "",
"pages": "4439--4445",
"other_ids": {
"DOI": [
"10.24963/ijcai.2018/617"
]
},
"num": null,
"urls": [],
"raw_text": "Jingjing Wang, Jie Li, Shoushan Li, Yangyang Kang, Min Zhang, Luo Si, and Guodong Zhou. 2018a. As- pect sentiment classification with both word-level and clause-level attention networks. In Proceed- ings of the Twenty-Seventh International Joint Con- ference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden., pages 4439-4445.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Cross-lingual projected expectation regularization for weakly supervised learning",
"authors": [
{
"first": "Mengqiu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "TACL",
"volume": "2",
"issue": "",
"pages": "55--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mengqiu Wang and Christopher D. Manning. 2014. Cross-lingual projected expectation regularization for weakly supervised learning. TACL, 2:55-66.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Target-sensitive memory networks for aspect sentiment classification",
"authors": [
{
"first": "Shuai",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Sahisnu",
"middle": [],
"last": "Mazumder",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mianwei",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "957--967",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuai Wang, Sahisnu Mazumder, Bing Liu, Mianwei Zhou, and Yi Chang. 2018b. Target-sensitive mem- ory networks for aspect sentiment classification. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 957-967. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Attention-based lstm for aspect-level sentiment classification",
"authors": [
{
"first": "Yequan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "606--615",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1058"
]
},
"num": null,
"urls": [],
"raw_text": "Yequan Wang, Minlie Huang, xiaoyan zhu, and Li Zhao. 2016. Attention-based lstm for aspect-level sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Lan- guage Processing, pages 606-615. Association for Computational Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Learning with target prior",
"authors": [
{
"first": "Zuoguan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Siwei",
"middle": [],
"last": "Lyu",
"suffix": ""
},
{
"first": "Gerwin",
"middle": [],
"last": "Schalk",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2012,
"venue": "Advances in Neural Information Processing Systems",
"volume": "25",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zuoguan Wang, Siwei Lyu, Gerwin Schalk, and Qiang Ji. 2012. Learning with target prior. In Advances in Neural Information Processing Systems 25: 26th",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Annual Conference on Neural Information Processing Systems",
"authors": [],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "2240--2248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Conference on Neural Information Process- ing Systems 2012. Proceedings of a meeting held December 3-6, 2012, Lake Tahoe, Nevada, United States., pages 2240-2248.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Feature-enhanced attention network for target-dependent sentiment classification",
"authors": [
{
"first": "Min",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chaoxue",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Lei",
"suffix": ""
}
],
"year": 2018,
"venue": "Neurocomputing",
"volume": "307",
"issue": "",
"pages": "91--97",
"other_ids": {
"DOI": [
"10.1016/j.neucom.2018.04.042"
]
},
"num": null,
"urls": [],
"raw_text": "Min Yang, Qiang Qu, Xiaojun Chen, Chaoxue Guo, Ying Shen, and Kai Lei. 2018. Feature-enhanced at- tention network for target-dependent sentiment clas- sification. Neurocomputing, 307:91-97.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Gated neural networks for targeted sentiment analysis",
"authors": [
{
"first": "Meishan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Duy-Tin",
"middle": [],
"last": "Vo",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "3087--3093",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meishan Zhang, Yue Zhang, and Duy-Tin Vo. 2016. Gated neural networks for targeted sentiment anal- ysis. In Proceedings of the Thirtieth AAAI Con- ference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA., pages 3087-3093.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Bayesian inference with posterior regularization and applications to infinite latent svms",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ning",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Machine Learning Research",
"volume": "15",
"issue": "1",
"pages": "1799--1847",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun Zhu, Ning Chen, and Eric P. Xing. 2014. Bayesian inference with posterior regularization and applica- tions to infinite latent svms. Journal of Machine Learning Research, 15(1):1799-1847.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Macro-F1 scores for the entire SemEval-2016 dataset of the different analyses. (a) the contribution of unlabeled data. (b) the effect of sentence classifier quality. (c) the effect of k. (d) the effect of sentence-level pretraining vs. corpus size."
},
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Data Method</td><td colspan=\"2\">SemEval-15 Acc. Macro-F1</td><td>Acc.</td><td colspan=\"2\">SemEval-16 Macro-F1</td></tr><tr><td>A</td><td>TDLSTM+ATT (Tang et al., 2016a)</td><td>77.10</td><td>59.46</td><td colspan=\"2\">83.11</td><td>57.53</td></tr><tr><td>A</td><td>ATAE-LSTM (Wang et al., 2016)</td><td>78.48</td><td>62.84</td><td colspan=\"2\">83.77</td><td>61.71</td></tr><tr><td>A</td><td>MM (Tang et al., 2016b)</td><td>77.89</td><td>59.52</td><td colspan=\"2\">83.04</td><td>57.91</td></tr><tr><td>A</td><td>RAM (Chen et al., 2017)</td><td>79.98</td><td>60.57</td><td colspan=\"2\">83.88</td><td>62.14</td></tr><tr><td>A</td><td>LSTM+SynATT+TarRep (He et al., 2018a)</td><td>81.67</td><td>66.05</td><td colspan=\"2\">84.61</td><td>67.45</td></tr><tr><td colspan=\"2\">S+A Semisupervised (He et al., 2018b)</td><td>81.30</td><td>68.74</td><td colspan=\"2\">85.58</td><td>69.76</td></tr><tr><td>S</td><td>BiLSTM-10 4 Sentence Training</td><td>80.24 \u00b11.64</td><td>61.89 \u00b10.94</td><td colspan=\"2\">80.89 \u00b12.79</td><td>61.40 \u00b12.49</td></tr><tr><td colspan=\"2\">S+A BiLSTM-10 4 Sentence Training \u2192Aspect Based Finetuning</td><td>77.75 \u00b12.09</td><td>60.83 \u00b14.53</td><td colspan=\"2\">84.87\u00b10.31</td><td>61.87 \u00b15.44</td></tr><tr><td>N</td><td>BiLSTM-XR-Dev Estimation</td><td colspan=\"4\">83.31 * \u00b1 0.62 62.24 \u00b10.66 87.68 * \u00b1 0.47</td><td>63.23 \u00b11.81</td></tr><tr><td>N</td><td>BiLSTM-XR</td><td>83.31</td><td/><td/><td/></tr></table>",
"num": null,
"text": "\u00b1 0.77 64.42 \u00b1 2.78 88.12 * \u00b1 0.24 68.60 \u00b11.79 N+A BiLSTM-XR \u2192Aspect Based Finetuning 83.44 * \u00b1 0.74 67.23 \u00b1 1.42 87.66 * \u00b1 0.28 71.19 \u2020\u00b1 1.40",
"html": null
},
"TABREF1": {
"type_str": "table",
"content": "<table><tr><td>Data</td><td>Training</td><td colspan=\"2\">SemEval-15 Acc. Macro-F1</td><td>Acc.</td><td colspan=\"2\">SemEval-16 Macro-F1</td></tr><tr><td>N</td><td>BiLSTM-XR</td><td>83.31 \u00b1 0.77</td><td colspan=\"3\">64.42 \u00b1 2.78 88.12 \u00b1 0.24</td><td>68.60 \u00b11.79</td></tr><tr><td>N+A</td><td>BiLSTM-XR \u2192Aspect Based Finetuning</td><td>83.44 \u00b1 0.74</td><td colspan=\"3\">67.23 \u00b1 1.42 87.66 \u00b1 0.28</td><td>71.19 \u00b1 1.40</td></tr><tr><td>A</td><td>BERT\u2192Aspect Based Finetuning</td><td>81.87 \u00b11.12</td><td>59.24 \u00b14.94</td><td colspan=\"2\">85.81 \u00b11.07</td><td>62.46 \u00b16.76</td></tr><tr><td>S</td><td>BERT\u219210 4 Sent Finetuning</td><td>83.29 \u00b10.77</td><td>66.79 \u00b11.99</td><td colspan=\"2\">84.53 \u00b11.66</td><td>65.53 \u00b13.03</td></tr><tr><td>S+A</td><td>BERT\u219210 4 Sent Finetuning \u2192Aspect Based Finetuning</td><td>82.54 \u00b11.21</td><td>64.13 \u00b15.05</td><td colspan=\"2\">85.67 \u00b11.14</td><td>64.13 \u00b17.07</td></tr><tr><td>N</td><td>BERT\u2192XR</td><td>85.46 * \u00b10.59</td><td>66.86 \u00b12.8</td><td colspan=\"2\">89.5 * \u00b10.55</td><td>70.86 \u2020\u00b12.96</td></tr><tr><td>N+A</td><td>BERT\u2192XR \u2192Aspect Based Finetuning</td><td>85.78</td><td/><td/><td/></tr></table>",
"num": null,
"text": "\u00b1 0.65 68.74 \u00b1 1.36 89.57 * \u00b1 1.4 73.89",
"html": null
}
}
}
}