{ "paper_id": "Q17-1036", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:12:38.964841Z" }, "title": "Aspect-augmented Adversarial Networks for Domain Adaptation", "authors": [ { "first": "Yuan", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "Artificial Intelligence Laboratory", "institution": "Massachusetts Institute of Technology", "location": {} }, "email": "yuanzh@csail.mit.edu" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "", "affiliation": { "laboratory": "Artificial Intelligence Laboratory", "institution": "Massachusetts Institute of Technology", "location": {} }, "email": "regina@csail.mit.edu" }, { "first": "Tommi", "middle": [], "last": "Jaakkola", "suffix": "", "affiliation": { "laboratory": "Artificial Intelligence Laboratory", "institution": "Massachusetts Institute of Technology", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We introduce a neural method for transfer learning between two (source and target) classification tasks or aspects over the same domain. Rather than training on target labels, we use a few keywords pertaining to source and target aspects indicating sentence relevance instead of document class labels. Documents are encoded by learning to embed and softly select relevant sentences in an aspect-dependent manner. A shared classifier is trained on the source encoded documents and labels, and applied to target encoded documents. We ensure transfer through aspect-adversarial training so that encoded documents are, as sets, aspect-invariant. Experimental results demonstrate that our approach outperforms different baselines and model variants on two datasets, yielding an improvement of 27% on a pathology dataset and 5% on a review dataset. 1", "pdf_parse": { "paper_id": "Q17-1036", "_pdf_hash": "", "abstract": [ { "text": "We introduce a neural method for transfer learning between two (source and target) classification tasks or aspects over the same domain. Rather than training on target labels, we use a few keywords pertaining to source and target aspects indicating sentence relevance instead of document class labels. Documents are encoded by learning to embed and softly select relevant sentences in an aspect-dependent manner. A shared classifier is trained on the source encoded documents and labels, and applied to target encoded documents. We ensure transfer through aspect-adversarial training so that encoded documents are, as sets, aspect-invariant. Experimental results demonstrate that our approach outperforms different baselines and model variants on two datasets, yielding an improvement of 27% on a pathology dataset and 5% on a review dataset. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Many NLP problems are naturally multitask classification problems. For instance, values extracted for different fields from the same document are often dependent as they share the same context. Existing systems rely on this dependence (transfer across fields) to improve accuracy. In this paper, we consider a version of this problem where there is a clear dependence between two tasks but annotations are available only for the source task. For example, Figure 1 : A snippet of a breast pathology report with diagnosis results for two types of disease (aspects): carcinoma (IDC) and lymph invasion (LVI). Note how the same phrase indicating positive results (e.g. 
identified) is applicable to both aspects. A transfer model learns to map other key phrases (e.g. Grade 3) to such shared indicators.", "cite_spans": [], "ref_spans": [ { "start": 455, "end": 463, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "the target goal may be to classify pathology reports (shown in Figure 1 ) for the presence of lymph invasion but training data are available only for carcinoma in the same reports. We call this problem aspect transfer as the objective is to learn to classify examples differently, focusing on different aspects, without access to target aspect labels. Clearly, such transfer learning is possible only with auxiliary information relating the tasks together.", "cite_spans": [], "ref_spans": [ { "start": 63, "end": 71, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The key challenge is to articulate and incorporate commonalities across the tasks. For instance, in classifying reviews of different products, sentiment words (referred to as pivots) can be shared across the products. This commonality enables one to align feature spaces across multiple products, enabling useful transfer (?) . Similar properties hold in other contexts and beyond sentiment analysis. Figure 1 shows that certain words and phrases like \"identified\", which indicates the presence of a histological property, are applicable to both carcinoma and lymph invasion. Our method learns and relies on such shared indicators, and utilizes them for effective transfer.", "cite_spans": [ { "start": 322, "end": 325, "text": "(?)", "ref_id": null } ], "ref_spans": [ { "start": 401, "end": 409, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The unique feature of our transfer problem is that both the source and the target classifiers operate over the same domain, i.e., the same examples. In this setting, traditional transfer methods will always predict the same label for both aspects and thus leading to failure. Instead of supplying the target classifier with direct training labels, our approach builds on a secondary relationship between the tasks using aspect-relevance annotations of sentences. These relevance annotations indicate a possibility that the answer could be found in a sentence, not what the answer is. One can often write simple keyword rules that identify sentence relevance to a particular aspect through representative terms, e.g., specific hormonal markers in the context of pathology reports. Annotations of this kind can be readily provided by domain experts, or extracted from medical literature such as codex rules in pathology (Pantanowitz et al., 2008) . We assume a small number of relevance annotations (rules) pertaining to both source and target aspects as a form of weak supervision. We use this sentence-level aspect relevance to learn how to encode the examples (e.g., pathology reports) from the point of view of the desired aspect. In our approach, we construct different aspect-dependent encodings of the same document by softly selecting sentences relevant to the aspect of interest. 
The key to effective transfer is how these encodings are aligned.", "cite_spans": [ { "start": 918, "end": 944, "text": "(Pantanowitz et al., 2008)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This encoding mechanism brings the problem closer to the realm of standard domain adaptation, where the derived aspect-specific representations are considered as different domains. Given these representations, our method learns a label classifier shared between the two domains. To ensure that it can be adjusted only based on the source class labels, and that it also reasonably applies to the target encodings, we must align the two sets of encoded examples. 2 Learning this alignment is pos-sible because, as discussed above, some keywords are directly transferable and can serve as anchors for constructing this invariant space. To learn this invariant representation, we introduce an adversarial domain classifier analogous to the recent successful use of adversarial training in computer vision (Ganin and Lempitsky, 2014) . The role of the domain classifier (adversary) is to learn to distinguish between the two types of encodings. During training we update the encoder with an adversarial objective to cause the classifier to fail. The encoder therefore learns to eliminate aspect-specific information so that encodings look invariant (as sets) to the classifier, thus establishing aspect-invariance encodings and enabling transfer. All three components in our approach, 1) aspect-driven encoding, 2) classification of source labels, and 3) domain adversary, are trained jointly (concurrently) to complement and balance each other.", "cite_spans": [ { "start": 801, "end": 828, "text": "(Ganin and Lempitsky, 2014)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Adversarial training of domain and label classifiers can be challenging to stabilize. In our setting, sentences are encoded with a convolutional model. Feedback from adversarial training can be an unstable guide for how the sentences should be encoded. To address this issue, we incorporate an additional word-level auto-encoder reconstruction loss to ground the convolutional processing of sentences. We empirically demonstrate that this additional objective yields richer and more diversified feature representations, improving transfer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We evaluate our approach on pathology reports (aspect transfer) as well as on a more standard review dataset (domain adaptation). On the pathology dataset, we explore cross-aspect transfer across different types of breast disease. Specifically, we test on six adaptation tasks, consistently outperforming all other baselines. Overall, our full model achieves 27% and 20.2% absolute improvement arising from aspect-driven encoding and adversarial training respectively. Moreover, our unsupervised adaptation method is only 5.7% behind the accuracy of a supervised target model. On the review dataset, we test adaptations from hotel to restaurant reviews. Our model outperforms the marginalized denoising autoencoder (Chen et al., 2012) by 5%. 
Finally, we examine and illustrate the impact of individual components on the resulting performance.", "cite_spans": [ { "start": 715, "end": 734, "text": "(Chen et al., 2012)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Existing approaches commonly induce abstract representations without pulling apart different aspects in the same example, and therefore are likely to fail on the aspect transfer problem. The majority of these prior methods first learn a task-independent representation, and then train a label predictor (e.g. SVM) on this representation in a separate step. For example, earlier researches employ a shared autoencoder (Glorot et al., 2011; Chopra et al., 2013) to learn a cross-domain representation. Chen et al. (2012) further improve and stabilize the representation learning by utilizing marginalized denoising autoencoders. Later, Zhou et al. (2016) propose to minimize domain-shift of the autoencoder in a linear data combination manner. Other researches have focused on learning transferable representations in an end-to-end fashion. Examples include using transduction learning for object recognition (Sener et al., 2016) and using residual transfer networks for image classification (Long et al., 2016) . In contrast, we use adversarial training to encourage learning domaininvariant features in a more explicit way. Our approach offers another two advantages over prior work. First, we jointly optimize features with the final classification task while many previous works only learn task-independent features using autoencoders. Second, our model can handle traditional domain transfer as well as aspect transfer, while previous methods can only handle the former.", "cite_spans": [ { "start": 417, "end": 438, "text": "(Glorot et al., 2011;", "ref_id": "BIBREF17" }, { "start": 439, "end": 459, "text": "Chopra et al., 2013)", "ref_id": "BIBREF12" }, { "start": 500, "end": 518, "text": "Chen et al. (2012)", "ref_id": "BIBREF9" }, { "start": 634, "end": 652, "text": "Zhou et al. (2016)", "ref_id": "BIBREF48" }, { "start": 907, "end": 927, "text": "(Sener et al., 2016)", "ref_id": "BIBREF35" }, { "start": 990, "end": 1009, "text": "(Long et al., 2016)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work Domain Adaptation for Deep Learning", "sec_num": "2" }, { "text": "Our approach closely relates to the idea of domainadversarial training. Adversarial networks were originally developed for image generation (Goodfellow et al., 2014; Makhzani et al., 2015; Springenberg, 2015; Radford et al., 2016; Taigman et al., 2016) , and were later applied to domain adaptation in computer vision (Ganin and Lempitsky, 2014; Ganin et al., 2015; Bousmalis et al., 2016; Tzeng et al., 2014) and speech recognition (Shinohara, 2016) . The core idea of these approaches is to promote the emergence of invariant image features by optimizing the feature extractor as an adversary against the domain classifier. While Ganin et al. 
(2015) also apply this idea to sentiment analysis, their practical gains have remained limited.", "cite_spans": [ { "start": 140, "end": 165, "text": "(Goodfellow et al., 2014;", "ref_id": "BIBREF18" }, { "start": 166, "end": 188, "text": "Makhzani et al., 2015;", "ref_id": null }, { "start": 189, "end": 208, "text": "Springenberg, 2015;", "ref_id": "BIBREF37" }, { "start": 209, "end": 230, "text": "Radford et al., 2016;", "ref_id": "BIBREF33" }, { "start": 231, "end": 252, "text": "Taigman et al., 2016)", "ref_id": "BIBREF38" }, { "start": 318, "end": 345, "text": "(Ganin and Lempitsky, 2014;", "ref_id": "BIBREF15" }, { "start": 346, "end": 365, "text": "Ganin et al., 2015;", "ref_id": "BIBREF16" }, { "start": 366, "end": 389, "text": "Bousmalis et al., 2016;", "ref_id": "BIBREF3" }, { "start": 390, "end": 409, "text": "Tzeng et al., 2014)", "ref_id": "BIBREF39" }, { "start": 433, "end": 450, "text": "(Shinohara, 2016)", "ref_id": "BIBREF36" }, { "start": 632, "end": 651, "text": "Ganin et al. (2015)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Adversarial Learning in Vision and NLP", "sec_num": null }, { "text": "Our approach presents two main departures. In computer vision, adversarial learning has been used for transferring across domains, while our method can also handle aspect transfer. In addition, we introduce a reconstruction loss which results in more robust adversarial training. We believe that this formulation will benefit other applications of adversarial training, beyond the ones described in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adversarial Learning in Vision and NLP", "sec_num": null }, { "text": "In our work, we use a small set of keywords as a source of weak supervision for aspect-relevance scoring. This relates to prior work on utilizing prototypes and seed words in semi-supervised learning (Haghighi and Klein, 2006; Grenager et al., 2005; Chang et al., 2007; Mann and McCallum, 2010; Jagarlamudi et al., 2012; Li et al., 2012; Eisenstein, 2017) . All these prior approaches utilize prototype annotations primarily targeting model bootstrapping but not for learning representations. In contrast, our model uses provided keywords to learn aspect-driven encoding of input examples.", "cite_spans": [ { "start": 200, "end": 226, "text": "(Haghighi and Klein, 2006;", "ref_id": "BIBREF20" }, { "start": 227, "end": 249, "text": "Grenager et al., 2005;", "ref_id": "BIBREF19" }, { "start": 250, "end": 269, "text": "Chang et al., 2007;", "ref_id": "BIBREF8" }, { "start": 270, "end": 294, "text": "Mann and McCallum, 2010;", "ref_id": "BIBREF28" }, { "start": 295, "end": 320, "text": "Jagarlamudi et al., 2012;", "ref_id": "BIBREF22" }, { "start": 321, "end": 337, "text": "Li et al., 2012;", "ref_id": "BIBREF24" }, { "start": 338, "end": 355, "text": "Eisenstein, 2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Semi-supervised Learning with Keywords", "sec_num": null }, { "text": "One may view our aspect-relevance scorer as a sentence-level \"semi-supervised attention\", in which relevant sentences receive more attention during feature extraction. While traditional attention-based models typically induce attention in an unsupervised manner, they have to rely on a large amount of labeled data for the target task (Bahdanau et al., 2015; Rush et al., 2015; Chen et al., 2015; Cheng et al., 2016; Xu and Saenko, 2016; Yang et al., 2016; Martins and Astudillo, 2016; Lei et al., 2016) . 
Unlike these methods, our approach assumes no label annotations in the target domain. Other researches have focused on utilizing human-provided rationales as \"supervised attention\" to improve prediction (Zaidan et al., 2007; Marshall et al., 2015; Zhang et al., 2016; Brun et al., 2016) . In contrast, our model only assumes access to a small set of keywords as a source of weak supervision. Moreover, all these prior approaches focus on in-domain classification. In this paper, however, we study the task in the context of domain adaptation.", "cite_spans": [ { "start": 335, "end": 358, "text": "(Bahdanau et al., 2015;", "ref_id": "BIBREF0" }, { "start": 359, "end": 377, "text": "Rush et al., 2015;", "ref_id": "BIBREF34" }, { "start": 378, "end": 396, "text": "Chen et al., 2015;", "ref_id": "BIBREF10" }, { "start": 397, "end": 416, "text": "Cheng et al., 2016;", "ref_id": "BIBREF11" }, { "start": 417, "end": 437, "text": "Xu and Saenko, 2016;", "ref_id": "BIBREF42" }, { "start": 438, "end": 456, "text": "Yang et al., 2016;", "ref_id": "BIBREF45" }, { "start": 457, "end": 485, "text": "Martins and Astudillo, 2016;", "ref_id": "BIBREF30" }, { "start": 486, "end": 503, "text": "Lei et al., 2016)", "ref_id": "BIBREF23" }, { "start": 709, "end": 730, "text": "(Zaidan et al., 2007;", "ref_id": "BIBREF46" }, { "start": 731, "end": 753, "text": "Marshall et al., 2015;", "ref_id": "BIBREF29" }, { "start": 754, "end": 773, "text": "Zhang et al., 2016;", "ref_id": "BIBREF47" }, { "start": 774, "end": 792, "text": "Brun et al., 2016)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Attention Mechanism in NLP", "sec_num": null }, { "text": "Existing multitask learning methods focus on the case where supervision is available for all tasks. A typical architecture involves using a shared encoder with a separate clas-sifier for each task. (Caruana, 1998; Pan and Yang, 2010; Collobert and Weston, 2008; Liu et al., 2015; Bordes et al., 2012) . In contrast, our work assumes labeled data only for the source aspect. We train a single classifier for both aspects by learning aspectinvariant representation that enables the transfer.", "cite_spans": [ { "start": 198, "end": 213, "text": "(Caruana, 1998;", "ref_id": "BIBREF7" }, { "start": 214, "end": 233, "text": "Pan and Yang, 2010;", "ref_id": "BIBREF31" }, { "start": 234, "end": 261, "text": "Collobert and Weston, 2008;", "ref_id": "BIBREF13" }, { "start": 262, "end": 279, "text": "Liu et al., 2015;", "ref_id": "BIBREF25" }, { "start": 280, "end": 300, "text": "Bordes et al., 2012)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Multitask Learning", "sec_num": null }, { "text": "We begin by formalizing aspect transfer with the idea of differentiating it from standard domain adaptation. In our setup, we have two classification tasks called the source and the target tasks. In contrast to source and target tasks in domain adaptation, both of these tasks are defined over the same set of examples (here documents, e.g., pathology reports). What differentiates the two classification tasks is that they pertain to different aspects in the examples. If each training document were annotated with both the source and the target aspect labels, the problem would reduce to multi-label classification. 
However, in our setting training labels are available only for the source aspect so the goal is to solve the target task without any associated training label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "To fix the notation, let", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "d = {s i } |d| i=1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "be a document that consists of a sequence of |d| sentences s i . Given a document d, and the aspect of interest, we wish to predict the corresponding aspect-dependent class label y (e.g., y \u2208 {\u22121, 1}). We assume that the set of possible labels are the same across aspects. We use y s l;k to denote the k-th coordinate of a one-hot vector indicating the correct training source aspect label for document d l . Target aspect labels are not available during training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "Beyond labeled documents for the source aspect {d l , y s l } l\u2208L , and shared unlabeled documents for source and target aspects {d l } l\u2208U , we assume further that we have relevance scores pertaining to each aspect. The relevance is given per sentence, for some subset of sentences across the documents, and indicates the possibility that the answer for that document would be found in the sentence but without indicating which way the answer goes. Relevance is always aspect dependent yet often easy to provide in the form of simple keyword rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "We use r a i \u2208 {0, 1} to denote the given relevance label pertaining to aspect a for sentence s i . Only a small subset of sentences in the training set have as-sociated relevance labels. Let R = {(a, l, i)} denote the index set of relevance labels such that if (a, l, i) \u2208 R then aspect a's relevance label r a l,i is available for the i th sentence in document d l . In our case relevance labels arise from aspect-dependent keyword matches. r a i = 1 when the sentence contains any keywords pertaining to aspect a and r a i = 0 if it has any keywords of other aspects. 3 Separate subsets of relevance labels are available for each aspect as the keywords differ.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "The transfer that is sought here is between two tasks over the same set of examples rather than between two different types of examples for the same task as in standard domain adaptation. However, the two formulations can be reconciled if full relevance annotations are assumed to be available during training and testing. In this scenario, we could simply lift the sets of relevant sentences from each document as new types of documents. The goal would be then to learn to classify documents of type T (consisting of sentences relevant to the target aspect) based on having labels only for type S (source) documents, a standard domain adaptation task. 
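For illustration, a keyword rule of this form can be realized in a few lines of Python. This is a minimal sketch of the labeling scheme described above; the aspect keyword lists and the function name are hypothetical placeholders rather than the exact rules used for the pathology or review data, and practical rules would also need to handle negations and tokenization.

```python
# Minimal sketch of keyword-rule relevance labeling as described above.
# The keyword lists are hypothetical placeholders, not the actual rules
# used for the pathology or review datasets.
ASPECT_KEYWORDS = {
    "LCIS": {"lcis", "lobular carcinoma in situ"},
    "IDC":  {"idc", "invasive ductal carcinoma"},
}

def relevance_label(sentence, aspect):
    """Return 1 (relevant), 0 (irrelevant), or None when no rule fires for `aspect`."""
    text = sentence.lower()
    hits_aspect = any(k in text for k in ASPECT_KEYWORDS[aspect])
    hits_other = any(
        k in text
        for other, kws in ASPECT_KEYWORDS.items() if other != aspect
        for k in kws
    )
    if hits_aspect:      # keywords of aspect a (possibly alongside others) -> r = 1
        return 1
    if hits_other:       # only keywords of other aspects -> r = 0
        return 0
    return None          # unmatched sentences receive no relevance label
```

Sentences for which no rule fires simply receive no relevance label and contribute nothing to the relevance supervision.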
Our problem is more challenging as the aspect-relevance of sentences must be learned from limited annotations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "Finally, we note that the aspect transfer problem and the method we develop to solve it work the same even when source and target documents are a priori different, something we will demonstrate later.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Formulation", "sec_num": "3" }, { "text": "Our model consists of three key components as shown in Figure 2 . Each document is encoded in a relevance weighted, aspect-dependent manner (green, left part of Figure 2 ) and classified using the label predictor (blue, top-right). During training, the encoded documents are also passed on to the domain classifier (orange, bottom-right). The role of the domain classifier, as the adversary, is to ensure that the aspect-dependent encodings of documents are distributionally matched. This matching justifies the use of the same end-classifier to provide the predicted label regardless of the task (aspect). To encode a document, the model first maps each sentence into a vector and then passes the vector to a scoring network to determine whether the sentence is relevant for the chosen aspect. These predicted relevance scores are used to obtain document vectors by taking relevance-weighted sum of the associated sentence vectors. Thus, the manner in which the document vector is constructed is always aspectdependent due to the chosen relevance weights.", "cite_spans": [], "ref_spans": [ { "start": 55, "end": 63, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 161, "end": 169, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Overview of our approach", "sec_num": "4.1" }, { "text": "During training, the resulting adjusted document vectors are consumed by the two classifiers. The primary label classifier aims to predict the source labels (when available), while the domain classifier determines whether the document vector pertains to the source or target aspect, which is the label that we know by construction. Furthermore, we jointly update the document encoder with a reverse of the gradient from the domain classifier, so that the encoder learns to induce document representations that fool the domain classifier. The resulting encoded representations will be aspect-invariant, facilitating transfer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of our approach", "sec_num": "4.1" }, { "text": "Our adversarial training scheme uses all the training losses concurrently to adjust the model parameters. During testing, we simply encode each test document in a target-aspect dependent manner, and apply the same label predictor. We expect that the same label classifier does well on the target task since it solves the source task, and operates on relevance-weighted representations that are matched across the tasks. 
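The data flow just described can be summarized in the following PyTorch-style sketch. It is our own illustrative reconstruction of the overview, not the authors' released implementation: class names and layer sizes are arbitrary, the relevance scorer is shown with a single output head rather than one per aspect, and the reversed gradient from the domain classifier is implemented as a custom autograd function.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; scales the gradient by -rho on the backward pass,
    so the encoder is updated to fool the domain classifier."""
    @staticmethod
    def forward(ctx, x, rho):
        ctx.rho = rho
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.rho * grad_output, None


class AspectAdversarialNet(nn.Module):
    def __init__(self, dim=150, n_classes=2):
        super().__init__()
        # Relevance scorer: a non-negative score per sentence (one shared head shown here).
        self.scorer = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1), nn.ReLU())
        # Shared label classifier and the adversarial domain classifier.
        self.label_clf = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, n_classes))
        self.domain_clf = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 2))

    def forward(self, sent_vecs, rho=1.0):
        # sent_vecs: (num_sentences, dim) sentence embeddings of one document under one aspect.
        r = self.scorer(sent_vecs)                               # predicted relevance scores, (S, 1)
        doc = (r * sent_vecs).sum(0) / (r.sum() + 1e-6)          # relevance-weighted document vector
        label_logits = self.label_clf(doc)                       # shared label prediction
        domain_logits = self.domain_clf(GradReverse.apply(doc, rho))  # adversary sees reversed gradients
        return label_logits, domain_logits, r
```

In training, the cross-entropy losses from both heads, together with the relevance and reconstruction terms introduced below, would be summed into a single objective; because of the reversed gradient, minimizing that objective pushes the encoder toward aspect-invariant document vectors while the domain classifier itself is trained normally.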
While our method is designed to work in the extreme setting that the examples for the two aspects are the same, this is by no means a re-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of our approach", "sec_num": "4.1" }, { "text": "reconstruction of ductal carcinoma is identified \u2026 \u2026 \u2026 \u2026 \u2026 \u2026 \u2026 \u2026 \u2026 \u2026 \u2026 sentence embeddings max-pooling: \u2026 x 0 x 1 x 2 x 3 x 2 = tanh(W c h 2 + b c ) x 2 h 1 h 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of our approach", "sec_num": "4.1" }, { "text": "x sen = max{h1, h2, . . .} Figure 3 : Illustration of the convolutional model and the reconstruction of word embeddings from the associated convolutional layer. quirement. Our method will also work fine in the more traditional domain adaptation setting, which we will demonstrate later.", "cite_spans": [], "ref_spans": [ { "start": 27, "end": 35, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Overview of our approach", "sec_num": "4.1" }, { "text": "We apply a convolutional model illustrated in Figure 3 to each sentence s i to obtain sentence-level vector embeddings x sen i . The use of RNNs or bi-LSTMs would result in more flexible sentence embeddings but based on our initial experiments, we did not observe any significant gains over the simpler CNNs.", "cite_spans": [], "ref_spans": [ { "start": 46, "end": 54, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Sentence embedding", "sec_num": null }, { "text": "We further ground the resulting sentence embeddings by including an additional word-level reconstruction step in the convolutional model. The purpose of this reconstruction step is to balance adversarial training signals propagating back from the domain classifier. Specifically, it forces the sentence encoder to keep rich word-level information in contrast to adversarial training that seeks to eliminate aspect specific features. We provide an empirical analysis of the impact of this reconstruction in the experiment section (Section 7).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence embedding", "sec_num": null }, { "text": "More concretely, we reconstruct word embedding from the corresponding convolutional layer, as shown in Figure 3 . 4 We use x i,j to denote the embedding of the j-th word in sentence s i . Let h i,j be the convolutional output when x i,j is at the center of the window. We reconstruct x i,j b\u0177", "cite_spans": [ { "start": 114, "end": 115, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 103, "end": 111, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Sentence embedding", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "x i,j = tanh(W c h i,j + b c )", "eq_num": "(1)" } ], "section": "Sentence embedding", "sec_num": null }, { "text": "where W c and b c are parameters of the reconstruction layer. The loss associated with the reconstruction for document d is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence embedding", "sec_num": null }, { "text": "L rec (d) = 1 n i,j ||x i,j \u2212 tanh(x i,j )|| 2 2 (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence embedding", "sec_num": null }, { "text": "where n is the number of tokens in the document and indexes i, j identify the sentence and word, respectively. 
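As a concrete reference, the sketch below instantiates this encoder: a one-dimensional convolution over word embeddings, max-pooling into the sentence vector x_sen, and the reconstruction head of Equation (1), whose squared error gives the per-sentence contribution to Equation (2). The filter width, dimensions, and the tanh activation on the convolution output are illustrative assumptions, not necessarily the authors' exact configuration.

```python
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    """Convolutional sentence encoder with the word-level reconstruction of Eqs. (1)-(2).
    Filter width, dimensions, and the tanh on the convolution are illustrative choices."""
    def __init__(self, emb_dim=100, hidden=150, width=3):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, hidden, kernel_size=width, padding=width // 2)
        self.recon = nn.Linear(hidden, emb_dim)   # W_c and b_c in Eq. (1)

    def forward(self, x):
        # x: (num_words, emb_dim) word embeddings of a single sentence.
        h = torch.tanh(self.conv(x.t().unsqueeze(0))).squeeze(0).t()   # h_{i,j}: (num_words, hidden)
        x_sen = h.max(dim=0).values                                    # max-pooled sentence vector
        x_hat = torch.tanh(self.recon(h))                              # Eq. (1): reconstructed embeddings
        rec_loss = ((x_hat - torch.tanh(x)) ** 2).sum(dim=1).mean()    # Eq. (2), normalized per word
        return x_sen, rec_loss
```

The per-sentence reconstruction losses returned here are accumulated over all sentences of the labeled and unlabeled documents to form the overall reconstruction term.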
The overall reconstruction loss L rec is obtained by summing over all labeled/unlabeled documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence embedding", "sec_num": null }, { "text": "We use a small set of keyword rules to generate binary relevance labels, both positive (r = 1) and negative (r = 0). These labels represent the only supervision available to predict relevance. The prediction is made on the basis of the sentence vector x sen i passed through a feedforward network with a ReLU output unit. The network has a single shared hidden layer and a separate output layer for each aspect. Note that our relevance prediction network is trained as a non-negative regression model even though the available labels are binary, as relevance varies more on a linear rather than binary scale.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relevance prediction", "sec_num": null }, { "text": "Given relevance labels indexed by R = {(a, l, i)}, we minimize", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relevance prediction", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L rel = (a,l,i)\u2208R r a l,i \u2212r a l,i 2", "eq_num": "(3)" } ], "section": "Relevance prediction", "sec_num": null }, { "text": "wherer a l,i is the predicted (non-negative) relevance score pertaining to aspect a for the i th sentence in document d l , as shown in the left part of Figure 2 . r a l,i , defined earlier, is the given binary (0/1) relevance label. We use a score in [0, 1] scale because it can be naturally used as a weight for vector combinations, as shown next.", "cite_spans": [], "ref_spans": [ { "start": 153, "end": 161, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Relevance prediction", "sec_num": null }, { "text": "The initial vector representation for each document such as d l is obtained as a relevance weighted combination of the associated sentence vectors, i.e.,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document encoding", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "x doc,a l = ir a l,i \u2022 x sen l,i ir a l,i", "eq_num": "(4)" } ], "section": "Document encoding", "sec_num": null }, { "text": "The resulting vector selectively encodes information from the sentences based on relevance to the focal aspect.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document encoding", "sec_num": null }, { "text": "The manner in which document vectors arise from sentence vectors means that they will retain aspect-specific information that will hinder transfer across aspects. To help remove non-transferable information, we add a transformation layer to map the initial document vectors x doc,a l to their domain invariant (as a set) versions, as shown in Figure 2 . Specifically, the transformed representation is given by x tr,a l = W tr x doc,a l . Meanwhile, the transformation has to be strongly regularized lest the gradient from the adversary would wipe out all the document signal. 
We add the following regularization term", "cite_spans": [], "ref_spans": [ { "start": 343, "end": 351, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Transformation layer", "sec_num": null }, { "text": "\u2126 tr = \u03bb tr ||W tr \u2212 I|| 2 F (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformation layer", "sec_num": null }, { "text": "to discourage significant deviation away from identity I. \u03bb tr is a regularization parameter that has to be set separately based on validation performance. We show an empirical analysis of the impact of this transformation layer in Section 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformation layer", "sec_num": null }, { "text": "As shown in the topright part of Figure 2 , the classifier takes in the adjusted document representation as an input and predicts a probability distribution over the possible class labels. The classifier is a feed-forward network with a single hidden layer using ReLU activations and a softmax output layer over the possible class labels. Note that we train only one label classifier that is shared by both aspects. The classifier operates the same regardless of the aspect to which the document was encoded. It must therefore be cooperatively learned together with the encodings. Letp l;k denote the predicted probability of class k for document d l when the document is encoded from the point of view of the source aspect. Recall that [y s l;1 , . . . , y s l;m ] is a one-hot vector for the correct (given) source class label for document d l , hence also a distribution. We use the cross-entropy loss for the label classifier", "cite_spans": [], "ref_spans": [ { "start": 33, "end": 41, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Primary label classifier", "sec_num": null }, { "text": "L lab = l\u2208L \u2212 m k=1 y s l;k logp l;k (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Primary label classifier", "sec_num": null }, { "text": "As shown in the bottomright part of Figure 2 , the domain classifier functions as an adversary to ensure that the documents encoded with respect to the source and target aspects look the same as sets of examples. The invariance is achieved when the domain classifier (as the adversary) fails to distinguish between the two. Structurally, the domain classifier is a feed-forward network with a single ReLU hidden layer and a softmax output layer over the two aspect labels.", "cite_spans": [], "ref_spans": [ { "start": 36, "end": 44, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Domain classifier", "sec_num": null }, { "text": "Let y a = [y a 1 , y a 2 ] denote the one-hot domain label vector for aspect a \u2208 {s, t}. In other words, y s = [1, 0] and y t = [0, 1]. We useq k (x tr,a l ) as the predicted probability that the domain label is k when the domain classifier receives x tr,a l as the input. 
The domain classifier is trained to minimize", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain classifier", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L dom = l\u2208L\u222aU a\u2208{s,t} \u2212 2 k=1 y a k logq k (x tr,a l )", "eq_num": "(7)" } ], "section": "Domain classifier", "sec_num": null }, { "text": "We combine the individual component losses pertaining to word reconstruction, relevance labels, transformation layer regularization, source class labels, and domain adversary into an overall objective function", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint learning", "sec_num": "4.3" }, { "text": "L all = L rec + L rel + \u2126 tr + L lab \u2212 \u03c1L dom (8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint learning", "sec_num": "4.3" }, { "text": "which is minimized with respect to the model parameters except for the adversary (domain classifier). The adversary is maximizing the same objective with respect to its own parameters. The last term \u2212\u03c1L dom corresponds to the objective of causing the domain classifier to fail. The proportionality constant \u03c1 controls the impact of gradients from the adversary on the document representation; the adversary itself is always directly minimizing L dom . All the parameters are optimized jointly using standard backpropagation (concurrent for the adversary). Each mini-batch is balanced by aspect, half coming from the source, the other half from the target. All the loss functions except L lab make use of both labeled and unlabeled documents. Additionally, it would be straightforward to add a loss term for target labels if they are available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint learning", "sec_num": "4.3" }, { "text": "Pathology dataset This dataset contains 96.6k breast pathology reports collected from three hospitals (Yala et al., 2016) . A portion of this dataset is manually annotated with 20 categorical values, representing various aspects of breast disease. In our experiments, we focus on four aspects related to carcinomas and atypias: Ductal Carcinoma In-Situ (DCIS), Lobular Carcinoma In-Situ (LCIS), Invasive Ductal Carcinoma (IDC) and Atypical Lobular Hyperplasia (ALH). Each aspect is annotated using binary labels. We use 500 held out reports as our test set and use the rest of the labeled data as our training set: 23.8k reports for DCIS, 10.7k for LCIS, 22.9k for IDC, and 9.2k for ALH. Table 1 summarizes statistics of the dataset.", "cite_spans": [ { "start": 102, "end": 121, "text": "(Yala et al., 2016)", "ref_id": "BIBREF44" } ], "ref_spans": [ { "start": 688, "end": 695, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5" }, { "text": "We explore the adaptation problem from one aspect to another. For example, we want to train a model on annotations of DCIS and apply it on LCIS. For each aspect, we use up to three common names as a source of supervision for learning the relevance scorer, as illustrated in Table 2 . Note that the provided list is by no means exhaustive. In fact Buckley et al. 
(2012) provide example of 60 different verbalizations of LCIS, not counting negations.", "cite_spans": [], "ref_spans": [ { "start": 274, "end": 281, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5" }, { "text": "Our second experiment is based on a domain transfer of sentiment classification. For the source domain, we use the hotel review dataset introduced in previous work (Wang et al., 2010; Wang et al., 2011) , and for the target domain, we use the restaurant review dataset from Yelp. 5 Both datasets have ratings on a scale of 1 to 5 stars. Following previous work (Blitzer et al., 2007) , we label reviews with ratings > 3 as positive and those with ratings < 3 as negative, discarding the rest. The hotel dataset includes a total of around 200k reviews collected from TripAdvisor, 6 so we split 100k as labeled and the other 100k as unlabeled data. We randomly select 200k restaurant reviews as the unlabeled data in the target domain. Our test set consists of 2k reviews. Table 1 summarizes the statistics of the review dataset.", "cite_spans": [ { "start": 164, "end": 183, "text": "(Wang et al., 2010;", "ref_id": "BIBREF40" }, { "start": 184, "end": 202, "text": "Wang et al., 2011)", "ref_id": "BIBREF41" }, { "start": 361, "end": 383, "text": "(Blitzer et al., 2007)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 771, "end": 778, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Review dataset", "sec_num": null }, { "text": "The hotel reviews naturally have ratings for six aspects, including value, room quality, checkin service, room service, cleanliness and location. We use the first five aspects because the sixth aspect location has positive labels for over 95% of the reviews and thus the trained model will suffer from the lack of negative examples. The restaurant reviews, however, only have single ratings for an overall impression. Therefore, we explore the task of adaptation from each of the five hotel aspects to the restaurant domain. The hotel reviews dataset also provides a total of 280 keywords for different aspects that are generated by the bootstrapping method used in Wang et al. (2010) . We use those keywords as supervision for learning the relevance scorer.", "cite_spans": [ { "start": 666, "end": 684, "text": "Wang et al. (2010)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Review dataset", "sec_num": null }, { "text": "We first compare against a linear SVM trained on the raw bagof-words representation of labeled data in source. Second, we compare against our SourceOnly model that assumes no target domain data or keywords. It thus has no adversarial training or target aspect-relevance scoring. Next we compare METHOD SOURCE TARGET Keyword Lab. Unlab. Lab. Unlab. with marginalized Stacked Denoising Autoencoders (mSDA) (Chen et al., 2012) , a domain adaptation algorithm that outperforms both prior deep learning and shallow learning approaches. 
7", "cite_spans": [ { "start": 404, "end": 423, "text": "(Chen et al., 2012)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines and our method", "sec_num": null }, { "text": "SVM \u00d7 \u00d7 \u00d7 \u00d7 SourceOnly \u00d7 \u00d7 mSDA \u00d7 \u00d7 AAN-NA \u00d7 AAN-NR \u00d7 \u00d7 In-Domain \u00d7 \u00d7 \u00d7 AAN-Full \u00d7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines and our method", "sec_num": null }, { "text": "In the rest part of the paper, we name our method and its variants as AAN (Aspect-augmented Adversarial Networks). We compare against AAN-NA and AAN-NR that are our model variants without adversarial training and without aspectrelevance scoring respectively. Finally we include supervised models trained on the full set of In-Domain annotations as the performance upper bound. Table 3 summarizes the usage of labeled and unlabeled data in each domain as well as keyword rules by our model (AAN-Full) and different baselines. Note that our model assumes the same set of data as the AAN-NA, AAN-NR and mSDA methods.", "cite_spans": [], "ref_spans": [ { "start": 377, "end": 384, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Baselines and our method", "sec_num": null }, { "text": "Following prior work (Ganin and Lempitsky, 2014) , we gradually increase the adversarial strength \u03c1 and decay the learning rate during training. We also apply batch normalization (Ioffe and Szegedy, 2015) on the sentence encoder and apply dropout with a ratio of 0.2 on word embeddings and each hidden layer activation. We set the hidden layer size to 150 and pick the transformation regularization weight \u03bb t = 0.1 for the pathol- Table 4 : Pathology: Classification accuracy (%) of different approaches on the pathology reports dataset, including the results of twelve adaptation scenarios from four different aspects (IDC, ALH, DCIS and LCIS) in breast cancer pathology reports. \"mSDA\" indicates the marginalized denoising autoencoder in (Chen et al., 2012) . \"AAN-NA\" and \"AAN-NR\" corresponds to our model without the adversarial training and the aspect-relevance scoring component, respectively. We also include in the last column the in-domain supervised training results of our model as the performance upper bound. Boldface numbers indicate the best accuracy for each testing scenario.", "cite_spans": [ { "start": 21, "end": 48, "text": "(Ganin and Lempitsky, 2014)", "ref_id": "BIBREF15" }, { "start": 741, "end": 760, "text": "(Chen et al., 2012)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 432, "end": 439, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Implementation details", "sec_num": null }, { "text": "ogy dataset and \u03bb t = 10.0 for the review dataset. Table 4 summarizes the classification accuracy of different methods on the pathology dataset, including the results of twelve adaptation tasks. Our full model (AAN-Full) consistently achieves the best performance on each task compared with other baselines and model variants. It is not surprising that SVM and mSDA perform poorly on this dataset because they only predict labels based on an overall feature representation of the input, and do not utilize weak supervision provided by aspect-specific keywords. As a reference, we also provide a performance upper bound by training our model on the full labeled set in the target domain, denoted as In-Domain in the last column of Table 4 . 
On average, the accuracy of our model (AAN-Full) is only 5.7% behind this upper bound. Table 5 shows the adaptation results from each aspect in the hotel reviews to the overall ratings of restaurant reviews. AAN-Full and AAN-NR are the two best performing systems on this review dataset, attaining around 5% improvement over the mSDA baseline. Below, we summarize our findings when comparing the full model with the two model variants AAN-NA and AAN-NR.", "cite_spans": [], "ref_spans": [ { "start": 51, "end": 58, "text": "Table 4", "ref_id": null }, { "start": 730, "end": 737, "text": "Table 4", "ref_id": null }, { "start": 827, "end": 834, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Implementation details", "sec_num": null }, { "text": "We first focus on comparisons between AAN-Full and AAN-NA. The only difference between the two models is that AAN-NA has no adversarial training. On the pathology dataset, our model significantly outperforms AAN-NA, yielding a 20.2% absolute average gain (see Table 4 ). On the review dataset, our model obtains 2.5% average improvement over AAN-NA. As shown in Table 5 , the gains are more significant when training on room and checkin aspects, reaching 6.9% and 4.5%, respectively.", "cite_spans": [], "ref_spans": [ { "start": 260, "end": 267, "text": "Table 4", "ref_id": null }, { "start": 362, "end": 369, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Impact of adversarial training", "sec_num": null }, { "text": "As shown in Table 4, the relevance scoring component plays a crucial role in classification on the pathology dataset. Table 5 : Review: Classification accuracy (%) of different approaches on the reviews dataset. Columns have the same meaning as in Table 4 . Boldface numbers indicate the best accuracy for each testing scenario. Our model achieves more than 27% improvement over AAN-NR. This is because, in general, aspects have zero correlations to each other in pathology reports. Therefore, it is essential for the model to have the capacity of distinguishing across different aspects in order to succeed in this task.", "cite_spans": [], "ref_spans": [ { "start": 118, "end": 125, "text": "Table 5", "ref_id": null }, { "start": 248, "end": 255, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Impact of relevance scoring", "sec_num": null }, { "text": "On the review dataset, however, we observe that relevance scoring has no significant impact on performance. On average, AAN-NR actually outperforms AAN-Full by 0.9%. This observation can be explained by the fact that different aspects in hotel reviews are highly correlated to each other. For example, the correlation between room quality and cleanliness is 0.81, much higher than aspect correlations in the pathology dataset. In other words, the sentiment is typically consistent across all sentences in a review, so that selecting aspect-specific sentences becomes unnecessary. Moreover, our supervision for the relevance scorer is weak and noisy because the aspect keywords are obtained in a semiautomatic way. Therefore, it is not surprising that AAN-NR sometimes delivers a better classification Table 6 : Impact of adding the reconstruction component in the model, measured by the average accuracy on each dataset. +REC. and -REC. 
denote the presence and absence of the reconstruction loss, respectively.", "cite_spans": [], "ref_spans": [ { "start": 801, "end": 808, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Impact of relevance scoring", "sec_num": null }, { "text": "accuracy than AAN-Full.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Impact of relevance scoring", "sec_num": null }, { "text": "Impact of the reconstruction loss Table 6 summarizes the impact of the reconstruction loss on the model performance. For our full model (AAN-Full), adding the reconstruction loss yields an average of 5.0% gain on the pathology dataset and 5.6% on the review dataset.", "cite_spans": [], "ref_spans": [ { "start": 34, "end": 41, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Analysis", "sec_num": "7" }, { "text": "seasoned . much closer to bland than anything . \u2026 from above .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "7" }, { "text": "\u2022 the room decor was not entirely modern . \u2026 we just had the run of the mill hotel room without a view .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "7" }, { "text": "very ill with what was suspected to be food poison", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "7" }, { "text": "\u2022 probably the noisiest room he could have given us in the whole hotel .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "7" }, { "text": "\u2022 the fries were undercooked and thrown haphazardly into the sauce holder . the shrimp was over cooked and just deepfried . \u2026 even the water tasted weird . \u2026", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Restaurant Reviews", "sec_num": null }, { "text": "\u2022 the room was old . \u2026 we did n't like the night shows at all . \u2026", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Restaurant Reviews", "sec_num": null }, { "text": "\u2022 however , the decor was just fair . \u2026 in the second bedroom it literally rained water from above .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Restaurant Reviews", "sec_num": null }, { "text": "\u2022 rest room in this restaurant is very dirty . \u2026", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Restaurant Reviews", "sec_num": null }, { "text": "\u2022 the only problem i had was that \u2026 i was very ill with what was suspected to be food poison Nearest Hotel Reviews by Ours-Full Nearest Hotel Reviews by Ours-NA Figure 5 : Examples of restaurant reviews and their nearest neighboring hotel reviews induced by different models (column 2 and 3). We use room quality as the source aspect. The sentiment phrases of each review are in blue, and some reviews are also shortened for space. To analyze the reasons behind this difference, consider Figure 4 that shows the heat maps of the learned document representations on the review dataset. The top half of the matrices corresponds to input documents from the source domain and the bottom half corresponds to the target domain. Unlike the first matrix, the other two matrices have no significant difference between the two halves, indicating that adversarial training helps learning of domain-invariant representations. However, adversarial training also removes a lot of information from representations, as the second matrix is much more sparse than the first one. 
The third matrix shows that adding reconstruction loss effectively addresses this sparsity issue. Almost 85% of the entries of the second matrix have small values (< 10 \u22126 ) while the sparsity is only about 30% for the third one. Moreover, the standard deviation of the third matrix is also ten times higher than the second one. These comparisons demonstrate that the reconstruction loss function improves both the richness and diversity of the learned representations. Note that in the case of no adversarial training (AAN-NA), adding the reconstruction component has no clear effect. This is expected because the main motivation of adding this component is to achieve a more robust adversarial training. Table 7 shows the averaged accuracy with differ-ent regularization weights \u03bb t in Equation 5. We change \u03bb t to reflect different model variants. First, \u03bb t = \u221e corresponds to the removal of the transformation layer because the transformation is always identity in this case. Our model performs better than this variant on both datasets, yielding an average improvement of 9.8% on the pathology dataset and 2.1% on the review dataset. This result indicates the importance of adding the transformation layer. Second, using zero regularization (\u03bb t = 0) also consistently results in inferior performance, such as 13.8% loss on the pathology dataset. We hypothesize that zero regularization will dilute the effect from reconstruction because there is too much flexibility in transformation. As a result, the transformed representation will become sparse due to the adversarial training, leading to a performance loss.", "cite_spans": [], "ref_spans": [ { "start": 161, "end": 169, "text": "Figure 5", "ref_id": null }, { "start": 488, "end": 496, "text": "Figure 4", "ref_id": "FIGREF3" }, { "start": 1767, "end": 1774, "text": "Table 7", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Restaurant Reviews", "sec_num": null }, { "text": "DATASET \u03bb t = 0 0 < \u03bb t < \u221e \u03bb t =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Restaurant Reviews", "sec_num": null }, { "text": "Finally, in Figure 5 we illustrate a case study on the characteristics of learned abstract representations by different models. The first column shows an example restaurant review. Sentiment phrases in this example are mostly food-specific, such as \"undercooked\" and \"tasted weird\". In the other two columns, we show example hotel reviews that are nearest neighbors to the restaurant reviews, measured by cosine similarity between their representations. In column 2, many sentiment phrases are specific for room quality, such as \"old\" and \"rained water from above\". In column 3, however, most sentiment phrases are either common sentiment expressions (e.g. dirty) or food-related (e.g. food poison), even though the focus of the reviews is based on the room quality of hotels. 
This observation indicates that adversarial training (AAN-Full) successfully learns to eliminate domain-specific information and to map those domain-specific words into similar domain-invariant Figure 6 : Classification accuracy (y-axis) on two transfer scenarios (one on review and one on pathology dataset) with a varied number of keyword rules for learning sentence relevance (x-axis).", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 5", "ref_id": null }, { "start": 971, "end": 979, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Examples of neighboring reviews", "sec_num": null }, { "text": "representations. In contrast, AAN-NA only captures domain-invariant features from phrases that commonly present in both domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Examples of neighboring reviews", "sec_num": null }, { "text": "Finally, Figure 6 shows the accuracy of our full model (y-axis) when trained with various amount of keyword rules for relevance learning (x-axis). As expected, the transfer accuracy drops significantly when using fewer rules on the pathology dataset (LCIS as source and ALH as target). In contrary, the accuracy on the review dataset (hotel service as source and restaurant as target) is not sensitive to the amount of used relevance rules. This can be explained by the observation from Table 5 that the model without relevance scoring performs equally well as the full model due to the tight dependence in aspect labels.", "cite_spans": [], "ref_spans": [ { "start": 9, "end": 17, "text": "Figure 6", "ref_id": null }, { "start": 487, "end": 494, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Impact of keyword rules", "sec_num": null }, { "text": "In this paper, we propose a novel aspect-augmented adversarial network for cross-aspect and crossdomain adaptation tasks. Experimental results demonstrate that our approach successfully learns invariant representation from aspect-relevant fragments, yielding significant improvement over the mSDA baseline and our model variants. 
{ "text": "In this paper, we propose a novel aspect-augmented adversarial network for cross-aspect and cross-domain adaptation tasks. Experimental results demonstrate that our approach successfully learns invariant representations from aspect-relevant fragments, yielding significant improvements over the mSDA baseline and our model variants. The effectiveness of our approach suggests the potential application of adversarial networks to a broader range of NLP tasks for improved representation learning, such as machine translation and language generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" }, { "text": "The code is available at https://github.com/yuanzh/aspect_adversarial.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This alignment or invariance is enforced on the level of sets, not individual reports; the aspect-driven encoding of any specific report should remain substantially different for the two tasks, since the encoded examples are passed on to the same classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "r^a_i = 1 if the sentence contains keywords pertaining to both aspect a and other aspects.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This process is omitted in Figure 2 for brevity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The restaurant portion of https://www.yelp.com/dataset_challenge. 6 https://www.tripadvisor.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use the publicly available implementation provided by the authors at http://www.cse.wustl.edu/\u02dcmchen/code/mSDA.tar. We use the hyper-parameters provided by the authors; their models have more parameters than ours.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors acknowledge the support of the U.S. Army Research Office under grant number W911NF-10-1-0533. We thank the MIT NLP group, the TACL action editor Hal Daum\u00e9 III and the anonymous reviewers for their comments. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the ICLR.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification", "authors": [ { "first": "John", "middle": [], "last": "Blitzer", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the ACL", "volume": "7", "issue": "", "pages": "440--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Blitzer, Mark Dredze, Fernando Pereira, et al. 2007.
Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the ACL, volume 7, pages 440-447.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Joint learning of words and meaning representations for open-text semantic parsing", "authors": [ { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Xavier", "middle": [], "last": "Glorot", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the AISTATS", "volume": "22", "issue": "", "pages": "127--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. 2012. Joint learning of words and meaning representations for open-text semantic pars- ing. In Proceedings of the AISTATS, volume 22, pages 127-135.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Domain separation networks", "authors": [ { "first": "Konstantinos", "middle": [], "last": "Bousmalis", "suffix": "" }, { "first": "George", "middle": [], "last": "Trigeorgis", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Silberman", "suffix": "" }, { "first": "Dilip", "middle": [], "last": "Krishnan", "suffix": "" }, { "first": "Dumitru", "middle": [], "last": "Erhan", "suffix": "" } ], "year": 2016, "venue": "Advances in Neural Information Processing Systems (NIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Konstantinos Bousmalis, George Trigeorgis, Nathan Sil- berman, Dilip Krishnan, and Dumitru Erhan. 2016. Domain separation networks. In Advances in Neural Information Processing Systems (NIPS).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "XRCE at SemEval-2016 task 5: Feedbacked ensemble modeling on syntactico-semantic knowledge for aspect based sentiment analysis", "authors": [ { "first": "Caroline", "middle": [], "last": "Brun", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Perez", "suffix": "" }, { "first": "Claude", "middle": [], "last": "Roux", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)", "volume": "", "issue": "", "pages": "277--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "Caroline Brun, Julien Perez, and Claude Roux. 2016. XRCE at SemEval-2016 task 5: Feedbacked ensem- ble modeling on syntactico-semantic knowledge for aspect based sentiment analysis. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 277-281.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The feasibility of using natural language processing to extract clinical information from breast pathology reports", "authors": [ { "first": "Elizabeth", "middle": [ "Mh" ], "last": "Belli", "suffix": "" }, { "first": "Judy", "middle": [ "E" ], "last": "Kim", "suffix": "" }, { "first": "Barbara", "middle": [ "L" ], "last": "Garber", "suffix": "" }, { "first": "Michele", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "", "middle": [], "last": "Gadd", "suffix": "" } ], "year": 2012, "venue": "Journal of pathology informatics", "volume": "3", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Belli, Elizabeth MH. Kim, Judy E. Garber, Barbara L. Smith, Michele A. Gadd, et al. 2012. 
The feasibility of using natural language processing to extract clinical information from breast pathology reports. Journal of pathology informatics, 3(1):23.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Multitask learning", "authors": [ { "first": "Rich", "middle": [], "last": "Caruana", "suffix": "" } ], "year": 1998, "venue": "Learning to learn", "volume": "", "issue": "", "pages": "95--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rich Caruana. 1998. Multitask learning. In Learning to learn, pages 95-133. Springer.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Guiding semi-supervision with constraintdriven learning", "authors": [ { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Lev", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ming-Wei Chang, Lev Ratinov, and Dan Roth. 2007. Guiding semi-supervision with constraint- driven learning. In Proceedings of the ACL, vol- ume 45, page 280.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Marginalized denoising autoencoders for domain adaptation", "authors": [ { "first": "Minmin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Zhixiang", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Kilian", "middle": [], "last": "Weinberger", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Sha", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minmin Chen, Zhixiang Xu, Kilian Weinberger, and Fei Sha. 2012. Marginalized denoising autoencoders for domain adaptation. In Proceedings of the ICML.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "ABC-CNN: An attention based convolutional neural network for visual question answering", "authors": [ { "first": "Kan", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jiang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Liang-Chieh", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Haoyuan", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Ram", "middle": [], "last": "Nevatia", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1511.05960v2" ] }, "num": null, "urls": [], "raw_text": "Kan Chen, Jiang Wang, Liang-Chieh Chen, Haoyuan Gao, Wei Xu, and Ram Nevatia. 2015. ABC- CNN: An attention based convolutional neural net- work for visual question answering. arXiv preprint arXiv:1511.05960v2.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Long short-term memory-networks for machine reading", "authors": [ { "first": "Jianpeng", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine read- ing. 
In Proceedings of the EMNLP.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "DLID: Deep learning for domain adaptation by interpolating between domains", "authors": [ { "first": "Sumit", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Suhrid", "middle": [], "last": "Balakrishnan", "suffix": "" }, { "first": "Raghuraman", "middle": [], "last": "Gopalan", "suffix": "" } ], "year": 2013, "venue": "ICML Workshop on Challenges in Representation Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sumit Chopra, Suhrid Balakrishnan, and Raghuraman Gopalan. 2013. DLID: Deep learning for do- main adaptation by interpolating between domains. In ICML Workshop on Challenges in Representation Learning.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 25th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert and Jason Weston. 2008. A unified ar- chitecture for natural language processing: Deep neu- ral networks with multitask learning. In Proceed- ings of the 25th International Conference on Machine Learning, pages 160-167. ACM.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Unsupervised learning for lexicon-based classification", "authors": [ { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the National Conference on Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Eisenstein. 2017. Unsupervised learning for lexicon-based classification. In Proceedings of the Na- tional Conference on Artificial Intelligence (AAAI).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Unsupervised domain adaptation by backpropagation", "authors": [ { "first": "Yaroslav", "middle": [], "last": "Ganin", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Lempitsky", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaroslav Ganin and Victor Lempitsky. 2014. Unsuper- vised domain adaptation by backpropagation. 
In Pro- ceedings of the ICML.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Domainadversarial training of neural networks", "authors": [ { "first": "Yaroslav", "middle": [], "last": "Ganin", "suffix": "" }, { "first": "Evgeniya", "middle": [], "last": "Ustinova", "suffix": "" }, { "first": "Hana", "middle": [], "last": "Ajakan", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Germain", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Larochelle", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Laviolette", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Marchand", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Lempitsky", "suffix": "" } ], "year": 2015, "venue": "Journal of Machine Learning Research", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Fran\u00e7ois Laviolette, Mario Marchand, and Victor Lempitsky. 2015. Domain- adversarial training of neural networks. Journal of Machine Learning Research.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Domain adaptation for large-scale sentiment classification: A deep learning approach", "authors": [ { "first": "Xavier", "middle": [], "last": "Glorot", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 28th International Conference on Machine Learning (ICML-11)", "volume": "", "issue": "", "pages": "513--520", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceed- ings of the 28th International Conference on Machine Learning (ICML-11), pages 513-520.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Generative adversarial nets", "authors": [ { "first": "Ian", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Pouget-Abadie", "suffix": "" }, { "first": "Mehdi", "middle": [], "last": "Mirza", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xu", "suffix": "" }, { "first": "David", "middle": [], "last": "Warde-Farley", "suffix": "" }, { "first": "Sherjil", "middle": [], "last": "Ozair", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "2672--2680", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative ad- versarial nets. 
In Advances in Neural Information Pro- cessing Systems, pages 2672-2680.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Unsupervised learning of field segmentation models for information extraction", "authors": [ { "first": "Trond", "middle": [], "last": "Grenager", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "371--378", "other_ids": {}, "num": null, "urls": [], "raw_text": "Trond Grenager, Dan Klein, and Christopher D. Man- ning. 2005. Unsupervised learning of field segmen- tation models for information extraction. In Proceed- ings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 371-378. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Prototype-driven learning for sequence models", "authors": [ { "first": "Aria", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "320--327", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aria Haghighi and Dan Klein. 2006. Prototype-driven learning for sequence models. In Proceedings of the main conference on Human Language Technol- ogy Conference of the North American Chapter of the Association of Computational Linguistics, pages 320- 327. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "authors": [ { "first": "Sergey", "middle": [], "last": "Ioffe", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Szegedy", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sergey Ioffe and Christian Szegedy. 2015. Batch nor- malization: Accelerating deep network training by re- ducing internal covariate shift. In Proceedings of the ICML.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Incorporating lexical priors into topic models", "authors": [ { "first": "Jagadeesh", "middle": [], "last": "Jagarlamudi", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" }, { "first": "Raghavendra", "middle": [], "last": "Udupa", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "204--213", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jagadeesh Jagarlamudi, Hal Daum\u00e9 III, and Raghaven- dra Udupa. 2012. Incorporating lexical priors into topic models. In Proceedings of the 13th Conference of the European Chapter of the Association for Com- putational Linguistics, pages 204-213. 
Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Rationalizing neural predictions", "authors": [ { "first": "Tao", "middle": [], "last": "Lei", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Tommi", "middle": [], "last": "Jaakkola", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of the EMNLP.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Wikily supervised part-of-speech tagging", "authors": [ { "first": "Shen", "middle": [], "last": "Li", "suffix": "" }, { "first": "Joao", "middle": [ "V" ], "last": "Gra\u00e7a", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "1389--1398", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shen Li, Joao V. Gra\u00e7a, and Ben Taskar. 2012. Wiki- ly supervised part-of-speech tagging. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1389-1398. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Representation learning using multi-task deep neural networks for semantic classification and information retrieval", "authors": [ { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Duh", "suffix": "" }, { "first": "Ye-Yi", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the HLT-NAACL", "volume": "", "issue": "", "pages": "912--921", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaodong Liu, Jianfeng Gao, Xiaodong He, Li Deng, Kevin Duh, and Ye-Yi Wang. 2015. Representation learning using multi-task deep neural networks for se- mantic classification and information retrieval. In Pro- ceedings of the HLT-NAACL, pages 912-921.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Unsupervised domain adaptation with residual transfer networks", "authors": [ { "first": "Mingsheng", "middle": [], "last": "Long", "suffix": "" }, { "first": "Han", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Jianmin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Jordan", "suffix": "" } ], "year": 2016, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "136--144", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I. Jordan. 2016. Unsupervised domain adap- tation with residual transfer networks. In Advances in Neural Information Processing Systems, pages 136- 144.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Navdeep Jaitly, and Ian Goodfellow. 2015. 
Adversarial autoencoders", "authors": [ { "first": "Alireza", "middle": [], "last": "Makhzani", "suffix": "" }, { "first": "Jonathon", "middle": [], "last": "Shlens", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1511.05644v2" ] }, "num": null, "urls": [], "raw_text": "Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. 2015. Adversarial autoencoders. arXiv preprint arXiv:1511.05644v2.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Generalized expectation criteria for semi-supervised learning with weakly labeled data", "authors": [ { "first": "Gideon", "middle": [ "S" ], "last": "Mann", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2010, "venue": "Journal of machine learning research", "volume": "11", "issue": "", "pages": "955--984", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gideon S. Mann and Andrew McCallum. 2010. General- ized expectation criteria for semi-supervised learning with weakly labeled data. Journal of machine learn- ing research, 11:955-984.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "RobotReviewer: evaluation of a system for automatically assessing bias in clinical trials", "authors": [ { "first": "Iain", "middle": [ "J" ], "last": "Marshall", "suffix": "" }, { "first": "Jo\u00ebl", "middle": [], "last": "Kuiper", "suffix": "" }, { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "" } ], "year": 2015, "venue": "Journal of the American Medical Informatics Association", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iain J. Marshall, Jo\u00ebl Kuiper, and Byron C. Wallace. 2015. RobotReviewer: evaluation of a system for au- tomatically assessing bias in clinical trials. Journal of the American Medical Informatics Association.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "From softmax to sparsemax: A sparse model of attention and multi-label classification", "authors": [ { "first": "F", "middle": [ "T" ], "last": "Andr\u00e9", "suffix": "" }, { "first": "Ram\u00f3n", "middle": [], "last": "Martins", "suffix": "" }, { "first": "", "middle": [], "last": "Fernandez Astudillo", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andr\u00e9 F.T. Martins and Ram\u00f3n Fernandez Astudillo. 2016. From softmax to sparsemax: A sparse model of attention and multi-label classification.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "A survey on transfer learning", "authors": [ { "first": "Qiang", "middle": [], "last": "Sinno Jialin Pan", "suffix": "" }, { "first": "", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2010, "venue": "IEEE Transactions on Knowledge and Data Engineering", "volume": "22", "issue": "10", "pages": "1345--1359", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. 
IEEE Transactions on Knowledge and Data Engineering, 22(10):1345-1359.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Metz", "suffix": "" }, { "first": "Soumith", "middle": [], "last": "Chintala", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Luke Metz, and Soumith Chintala. 2016. Unsupervised representation learning with deep con- volutional generative adversarial networks. In Pro- ceedings of the ICLR.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "A neural attention model for abstractive sentence summarization", "authors": [ { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" }, { "first": "Sumit", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sen- tence summarization. In Proceedings of the EMNLP.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Learning transferrable representations for unsupervised domain adaptation", "authors": [ { "first": "Ozan", "middle": [], "last": "Sener", "suffix": "" }, { "first": "Hyun", "middle": [ "Oh" ], "last": "Song", "suffix": "" }, { "first": "Ashutosh", "middle": [], "last": "Saxena", "suffix": "" }, { "first": "Silvio", "middle": [], "last": "Savarese", "suffix": "" } ], "year": 2016, "venue": "Advances In Neural Information Processing Systems", "volume": "", "issue": "", "pages": "2110--2118", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ozan Sener, Hyun Oh Song, Ashutosh Saxena, and Sil- vio Savarese. 2016. Learning transferrable repre- sentations for unsupervised domain adaptation. In Advances In Neural Information Processing Systems, pages 2110-2118.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Adversarial multi-task learning of deep neural networks for robust speech recognition", "authors": [ { "first": "Yusuke", "middle": [], "last": "Shinohara", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "2369--2372", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yusuke Shinohara. 2016. Adversarial multi-task learn- ing of deep neural networks for robust speech recogni- tion. Interspeech 2016, pages 2369-2372.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Unsupervised and semisupervised learning with categorical generative adversarial networks", "authors": [ { "first": "Jost", "middle": [], "last": "Tobias Springenberg", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1511.06390v2" ] }, "num": null, "urls": [], "raw_text": "Jost Tobias Springenberg. 2015. Unsupervised and semi- supervised learning with categorical generative adver- sarial networks. 
arXiv preprint arXiv:1511.06390v2.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Unsupervised cross-domain image generation", "authors": [ { "first": "Yaniv", "middle": [], "last": "Taigman", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Polyak", "suffix": "" }, { "first": "Lior", "middle": [], "last": "Wolf", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1611.02200" ] }, "num": null, "urls": [], "raw_text": "Yaniv Taigman, Adam Polyak, and Lior Wolf. 2016. Unsupervised cross-domain image generation. arXiv preprint arXiv:1611.02200.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Deep domain confusion: Maximizing for domain invariance", "authors": [ { "first": "Eric", "middle": [], "last": "Tzeng", "suffix": "" }, { "first": "Judy", "middle": [], "last": "Hoffman", "suffix": "" }, { "first": "Ning", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Kate", "middle": [], "last": "Saenko", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Darrell", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.3474" ] }, "num": null, "urls": [], "raw_text": "Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. 2014. Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Latent aspect rating analysis on review text data: a rating regression approach", "authors": [ { "first": "Hongning", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Chengxiang", "middle": [], "last": "Zhai", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "783--792", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hongning Wang, Yue Lu, and Chengxiang Zhai. 2010. Latent aspect rating analysis on review text data: a rat- ing regression approach. In Proceedings of the 16th ACM SIGKDD International Conference on Knowl- edge Discovery and Data Mining, pages 783-792. ACM.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Latent aspect rating analysis without aspect keyword supervision", "authors": [ { "first": "Hongning", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Chengxiang", "middle": [], "last": "Zhai", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "618--626", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hongning Wang, Yue Lu, and ChengXiang Zhai. 2011. Latent aspect rating analysis without aspect key- word supervision. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 618-626. 
ACM.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Ask, attend and answer: Exploring question-guided spatial attention for visual question answering", "authors": [ { "first": "Huijuan", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Kate", "middle": [], "last": "Saenko", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the ECCV", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huijuan Xu and Kate Saenko. 2016. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. In Proceedings of the ECCV.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Show, attend and tell: Neural image caption generation with visual attention", "authors": [ { "first": "Kelvin", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Richard", "middle": [ "S" ], "last": "Zemel", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual at- tention. In Proceedings of the ICML, page 5.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Using machine learning to parse breast pathology reports", "authors": [ { "first": "Adam", "middle": [], "last": "Yala", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Salama", "suffix": "" }, { "first": "Molly", "middle": [], "last": "Griffin", "suffix": "" }, { "first": "Grace", "middle": [], "last": "Sollender", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Bardia", "suffix": "" }, { "first": "Constance", "middle": [], "last": "Lehman", "suffix": "" }, { "first": "Julliette", "middle": [ "M" ], "last": "Buckley", "suffix": "" }, { "first": "Suzanne", "middle": [ "B" ], "last": "Coopey", "suffix": "" }, { "first": "Fernanda", "middle": [], "last": "Polubriaginof", "suffix": "" }, { "first": "Judy", "middle": [ "E" ], "last": "Garber", "suffix": "" }, { "first": "Barbara", "middle": [ "L" ], "last": "Smith", "suffix": "" }, { "first": "Michele", "middle": [ "A" ], "last": "Gadd", "suffix": "" }, { "first": "Michelle", "middle": [ "C" ], "last": "Specht", "suffix": "" }, { "first": "Thomas", "middle": [ "M" ], "last": "Gudewicz", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Guidi", "suffix": "" }, { "first": "Alphonse", "middle": [], "last": "Taghian", "suffix": "" }, { "first": "Kevin", "middle": [ "S" ], "last": "Hughes", "suffix": "" } ], "year": 2016, "venue": "Breast Cancer Research and Treatment", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Yala, Regina Barzilay, Laura Salama, Molly Griffin, Grace Sollender, Aditya Bardia, Constance Lehman, Julliette M. Buckley, Suzanne B. Coopey, Fernanda Polubriaginof, Judy E. Garber, Barbara L. Smith, Michele A. Gadd, Michelle C. 
Specht, Thomas M. Gudewicz, Anthony Guidi, Alphonse Taghian, and Kevin S. Hughes. 2016. Using machine learning to parse breast pathology reports. Breast Can- cer Research and Treatment.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Stacked attention networks for image question answering", "authors": [ { "first": "Zichao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Smola", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. 2016. Stacked attention networks for image question answering. In Proceedings of the Con- ference on Computer Vision and Pattern Recognition (CVPR).", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Using \"annotator rationales\" to improve machine learning for text categorization", "authors": [ { "first": "Omar", "middle": [], "last": "Zaidan", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" }, { "first": "Christine", "middle": [ "D" ], "last": "Piatko", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the HLT-NAACL", "volume": "", "issue": "", "pages": "260--267", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omar Zaidan, Jason Eisner, and Christine D. Piatko. 2007. Using \"annotator rationales\" to improve ma- chine learning for text categorization. In Proceedings of the HLT-NAACL, pages 260-267. Citeseer.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Rationale-augmented convolutional neural networks for text classification", "authors": [ { "first": "Ye", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Iain", "middle": [], "last": "Marshall", "suffix": "" }, { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ye Zhang, Iain Marshall, and Byron C. Wallace. 2016. Rationale-augmented convolutional neural networks for text classification. In Proceedings of the EMNLP.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Bi-transferring deep neural networks for domain adaptation", "authors": [ { "first": "Guangyou", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Zhiwen", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Jimmy", "middle": [ "Xiangji" ], "last": "Huang", "suffix": "" }, { "first": "Tingting", "middle": [], "last": "He", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guangyou Zhou, Zhiwen Xie, Jimmy Xiangji Huang, and Tingting He. 2016. Bi-transferring deep neural net- works for domain adaptation. In Proceedings of the ACL.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": ": BREAST (LEFT) \u2026 Invasive ductal carcinoma: identified. Carcinoma tumor size: num cm. Grade: 3. \u2026 Lymphatic vessel invasion: identified. Blood vessel invasion: Suspicious. 
Margin of invasive carcinoma \u2026 Diagnosis results: Source (IDC): Positive Target (LVI): Positive" }, "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "Aspect-augmented adversarial network for transfer learning. The model is composed of (a) an aspect-driven document encoder, (b) a label predictor and (c) a domain classifier." }, "FIGREF3": { "uris": null, "type_str": "figure", "num": null, "text": "Heat map of 150 \u00d7 150 matrices. Each row corresponds to the vector representation of a document that comes from either the source domain (top half) or the target domain (bottom half). Models are trained on the review dataset when room quality is the source aspect." }, "TABREF1": { "content": "
ASPECT | KEYWORDS
IDC | IDC, Invasive Ductal Carcinoma
ALH | ALH, Atypical Lobular Hyperplasia
", "text": "Statistics of the pathology reports dataset and the reviews dataset that we use for training. Our model utilizes both labeled and unlabeled data.", "type_str": "table", "html": null, "num": null }, "TABREF2": { "content": "
Examples of aspects and their corresponding keywords (case insensitive) in the pathology dataset.
", "text": "", "type_str": "table", "html": null, "num": null }, "TABREF3": { "content": "
Usage of labeled (Lab.), unlabeled (Unlab.) data and keyword rules in each domain by our model and other baseline methods. AAN-* denote our model and its variants.
", "text": "", "type_str": "table", "html": null, "num": null }, "TABREF8": { "content": "", "text": "The effect of regularization of the transformation layer \u03bb t on the performance.", "type_str": "table", "html": null, "num": null } } } }