{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:11:01.672641Z"
},
"title": "Conditional Adversarial Networks for Multi-Domain Text Classification",
"authors": [
{
"first": "Yuan",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carleton University",
"location": {
"settlement": "Ottawa",
"region": "Ontario",
"country": "Canada"
}
},
"email": ""
},
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Ottawa",
"location": {
"settlement": "Ottawa",
"region": "Ontario",
"country": "Canada"
}
},
"email": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "El-Roby",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carleton University",
"location": {
"settlement": "Ottawa",
"region": "Ontario",
"country": "Canada"
}
},
"email": "ahmed.elroby@carleton.cadiana.inkpen@uottawa.ca"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we propose conditional adversarial networks (CANs), a framework that explores the relationship between the shared features and the label predictions to impose stronger discriminability to the learned features, for multi-domain text classification (MDTC). The proposed CAN introduces a conditional domain discriminator to model the domain variance in both the shared feature representations and the class-aware information simultaneously, and adopts entropy conditioning to guarantee the transferability of the shared features. We provide theoretical analysis for the CAN framework, showing that CAN's objective is equivalent to minimizing the total divergence among multiple joint distributions of shared features and label predictions. Therefore, CAN is a theoretically sound adversarial network that discriminates over multiple distributions. Evaluation results on two MDTC benchmarks show that CAN outperforms prior methods. Further experiments demonstrate that CAN has a good ability to generalize learned knowledge to unseen domains.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we propose conditional adversarial networks (CANs), a framework that explores the relationship between the shared features and the label predictions to impose stronger discriminability to the learned features, for multi-domain text classification (MDTC). The proposed CAN introduces a conditional domain discriminator to model the domain variance in both the shared feature representations and the class-aware information simultaneously, and adopts entropy conditioning to guarantee the transferability of the shared features. We provide theoretical analysis for the CAN framework, showing that CAN's objective is equivalent to minimizing the total divergence among multiple joint distributions of shared features and label predictions. Therefore, CAN is a theoretically sound adversarial network that discriminates over multiple distributions. Evaluation results on two MDTC benchmarks show that CAN outperforms prior methods. Further experiments demonstrate that CAN has a good ability to generalize learned knowledge to unseen domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Text classification is a fundamental task in Natural Language Processing (NLP) and has received constant attention due to its wide applications, ranging from spam detection to social media analytics (Pang et al., 2002; Hu and Liu, 2004; Choi and Cardie, 2008; Socher et al., 2012; Vo and Zhang, 2015) . Over the past couple of decades, supervised machine learning methods have shown dominant performance for text classification, such as Naive Bayes Classifiers (Troussas et al., 2013) , Support Vector Machines (Li et al., 2018) and Neural Networks . In particular, with the advent of deep learning, neural network-based text classification models have gained impressive achievements. However, text classification is well known to be highly domain-dependent, the same word could convey different sentiment polarities in different domains (Glorot et al., 2011) . For example, the word infantile expresses neutral sentiment in baby product review (e.g., The infantile cart is easy to use), while in book review, it often indicates a negative polarity (e.g., This book is infantile and boring). Thus a text classifier trained on one domain is likely to make spurious predictions on another domain whose distribution is different from the training data distribution. In addition, it is always difficult to collect sufficient labeled data for all interested domains. Therefore, it is of great significance to explore how to leverage available resources from related domains to improve the classification accuracy on the target domain.",
"cite_spans": [
{
"start": 199,
"end": 218,
"text": "(Pang et al., 2002;",
"ref_id": "BIBREF23"
},
{
"start": 219,
"end": 236,
"text": "Hu and Liu, 2004;",
"ref_id": "BIBREF14"
},
{
"start": 237,
"end": 259,
"text": "Choi and Cardie, 2008;",
"ref_id": "BIBREF6"
},
{
"start": 260,
"end": 280,
"text": "Socher et al., 2012;",
"ref_id": "BIBREF26"
},
{
"start": 281,
"end": 300,
"text": "Vo and Zhang, 2015)",
"ref_id": "BIBREF28"
},
{
"start": 461,
"end": 484,
"text": "(Troussas et al., 2013)",
"ref_id": "BIBREF27"
},
{
"start": 511,
"end": 528,
"text": "(Li et al., 2018)",
"ref_id": "BIBREF16"
},
{
"start": 838,
"end": 859,
"text": "(Glorot et al., 2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The major line of approaches to tackle the above problem is multi-domain text classification (MDTC) (Li and Zong, 2008) , which can handle the scenario where labeled data exist for multiple domains, but in insufficient amounts to training an effective classifier. Deep learning models have yielded impressive performance in MDTC (Wu and Guo, 2020; Wu et al., 2021) . Most recent MDTC methods adopt the shared-private paradigm, which divides the latent space into two types: one is the shared feature space for all domains with the aim of capturing domain-invariant knowledge, the other one is the private feature space for each domain which extracts domain-specific knowledge. To explicitly ensure the optimum separations among the shared latent space and multiple domain-specific feature spaces, the adversarial training (Goodfellow et al., 2014) is introduced in MDTC. By employing the adversarial training, the domain-specific features can be prevented from creeping into the shared latent space, which will lead to feature redundancy (Liu et al., 2017) . In adversarial training, a multinomial domain discriminator is trained against a shared feature extractor to minimize the divergences across different domains. When the domain discriminator and the shared feature extractor reach equilibrium, the learned shared features can be regarded as domain-invariant and used for the subsequent classification. The adversarial training-based MDTC approaches yield the stateof-the-art results (Liu et al., 2017; Chen and Cardie, 2018) . However, these methods still have a significant limitation: when the data distributions present complex structures, adversarial training may fail to perform global alignment among domains. Such a risk comes from the challenge that in adversarial training, only aligning the marginal distributions can not sufficiently guarantee the discriminability of the learned features. The features with different labels may be aligned, as shown in Figure 1 . The critical mismatch can lead to weak discriminability of the learned features. In this paper, motivated by the conditional generative adversarial networks (CGANs), which aligns distributions of real and generated images via conditioning the generator and discriminator on extra information (Mirza and Osindero, 2014) , we propose conditional adversarial networks (CANs) to address the aforementioned challenge. The CAN method introduces a conditional domain discriminator that models domain variance in both shared features and label predictions, exploring the relationship between shared feature representations and class-aware information conveyed by label predictions to encourage the shared feature extractor to capture more discriminative information. Moreover, we use entropy conditioning to avoid the risk of conditioning on the class-aware informa-tion with low certainty. The entropy conditioning strategy can give higher priority to easy-to-transfer instances. We also provide a theoretical analysis demonstrating the validity of CANs. Our approach adopts the shared-private paradigm. We validate the effectiveness of CAN on two MDTC benchmarks. It can be noted that CAN outperforms the state-of-the-art methods for both datasets. Finally, we empirically illustrate that CAN has the ability to generalize in cases where no labeled data exist for a subset of domains. The contributions of our work are listed as follows:",
"cite_spans": [
{
"start": 100,
"end": 119,
"text": "(Li and Zong, 2008)",
"ref_id": "BIBREF17"
},
{
"start": 329,
"end": 347,
"text": "(Wu and Guo, 2020;",
"ref_id": "BIBREF31"
},
{
"start": 348,
"end": 364,
"text": "Wu et al., 2021)",
"ref_id": "BIBREF33"
},
{
"start": 822,
"end": 847,
"text": "(Goodfellow et al., 2014)",
"ref_id": "BIBREF11"
},
{
"start": 1038,
"end": 1056,
"text": "(Liu et al., 2017)",
"ref_id": "BIBREF18"
},
{
"start": 1490,
"end": 1508,
"text": "(Liu et al., 2017;",
"ref_id": "BIBREF18"
},
{
"start": 1509,
"end": 1531,
"text": "Chen and Cardie, 2018)",
"ref_id": "BIBREF5"
},
{
"start": 2274,
"end": 2300,
"text": "(Mirza and Osindero, 2014)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 1971,
"end": 1979,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose conditional adversarial networks (CANs) for multi-domain text classification which incorporate conditional domain discriminator and entropy conditioning to perform alignment on the joint distributions of shared features and label predictions to improve the system performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We present the theoretical analysis of the CAN framework, demonstrating that CANs are minimizers of divergences among multiple joint distributions of shared features and label predictions, and providing the condition where the conditional domain discriminator reaches its optimum.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We evaluate the effectiveness of CAN on two MDTC benchmarks. The experimental results show that CAN yields state-of-the-art results. Moreover, further experiments on unsupervised multi-source domain adaptation demonstrate that CAN has a good capacity to generalize to unseen domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Multi-domain text classification (MDTC) was first proposed by (Li and Zong, 2008) , aiming to simultaneously leverage all existing resources across different domains to improve system performance. Currently, there are two main streams for MDTC: one strand exploits covariance matrix to model the relationship across domains (Dredze and Crammer, 2008; Saha et al., 2011; Zhang and Yeung, 2012) ; the other strand is based on neural networks, sharing the first several layers for each domain to extract low-level features and generating outputs with domain-specific parameters. The multi-task convolutional neural network (MT-CNN) utilizes a convolutional layer in which only the lookup table is shared for better word embeddings (Collobert and Weston, 2008) . The collaborative multi-domain sentiment classification (CMSC) combines a classifier that learns common knowledge among domains with a set of classifiers, one per domain, each of which captures domain-dependent features to make the final predictions (Wu and Huang, 2015 ). The multi-task deep neural network (MT-DNN) maps arbitrary text queries and documents into semantic vector representations in a low dimensional latent space and combines tasks as disparate as operations necessary for classification (Liu et al., 2015) .",
"cite_spans": [
{
"start": 62,
"end": 81,
"text": "(Li and Zong, 2008)",
"ref_id": "BIBREF17"
},
{
"start": 324,
"end": 350,
"text": "(Dredze and Crammer, 2008;",
"ref_id": "BIBREF8"
},
{
"start": 351,
"end": 369,
"text": "Saha et al., 2011;",
"ref_id": "BIBREF24"
},
{
"start": 370,
"end": 392,
"text": "Zhang and Yeung, 2012)",
"ref_id": "BIBREF36"
},
{
"start": 728,
"end": 756,
"text": "(Collobert and Weston, 2008)",
"ref_id": "BIBREF7"
},
{
"start": 1009,
"end": 1028,
"text": "(Wu and Huang, 2015",
"ref_id": "BIBREF30"
},
{
"start": 1264,
"end": 1282,
"text": "(Liu et al., 2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Pioneered by the generative adversarial network (GAN) (Goodfellow et al., 2014) , adversarial learning has been firstly proposed for image generation. (Ganin et al., 2016) applies adversarial learning in domain adaptation to extract domain-invariant features across two different distributions (binary adversarial learning). (Zhao et al., 2017) extends it to multiple adversarial learning, enabling the model to learn domain-invariant representations across multiple domains. However, only considering domaininvariant features can not provide optimal solutions for MDTC, because domain-specific information also plays an important role in training an effective classifier. (Bousmalis et al., 2016) proposes the shared-private paradigm to combine domaininvariant features with domain-specific ones to perform classification, illustrating that this scheme can improve system performance. To date, many state-of-the-art MDTC models adopt the adversarial learning and the shared-private paradigm. The adversarial multi-task learning for text classification (ASP-MTL) utilizes long short-term memory (LSTM) without attention as feature extractors and introduces orthogonality constraints to encourage the shared and private feature extractors to encode different aspects of the inputs (Liu et al., 2017) . The multinomial adversarial network (MAN) exploits two forms of loss functions to train the domain discriminator: the least square loss (MAN-L2) and negative log-likelihood loss (MAN-NLL) (Chen and Cardie, 2018). The multi-task learning with bidirectional language models for text classification (MT-BL) introduces language modeling as an auxiliary task to encourage the domainspecific feature extractors to capture more syntactic and semantic information, and a uniform label distribution-based loss constraint to the shared feature extractor to enhance the ability to learn domain-invariant features (Yang and Shang, 2019) .",
"cite_spans": [
{
"start": 54,
"end": 79,
"text": "(Goodfellow et al., 2014)",
"ref_id": "BIBREF11"
},
{
"start": 151,
"end": 171,
"text": "(Ganin et al., 2016)",
"ref_id": "BIBREF9"
},
{
"start": 325,
"end": 344,
"text": "(Zhao et al., 2017)",
"ref_id": "BIBREF37"
},
{
"start": 673,
"end": 697,
"text": "(Bousmalis et al., 2016)",
"ref_id": "BIBREF2"
},
{
"start": 1280,
"end": 1298,
"text": "(Liu et al., 2017)",
"ref_id": "BIBREF18"
},
{
"start": 1903,
"end": 1925,
"text": "(Yang and Shang, 2019)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Adversarial learning has several advantages, such as Markov chains are not needed and no inference is required during learning (Mirza and Osindero, 2014) . However, there still exists an issue in adversarial learning. When data distributions embody complex structures, adversarial learning can fail in performing the global alignment. The conditional generative adversarial network (CGAN) is proposed to address this problem (Mirza and Osindero, 2014) . In CGAN, both the generator and discriminator are conditioned on some extra information, such as labels or data from other modalities, to yield better results. The conditional adversarial training mechanism has been explored in transfer learning. The conditional domain adversarial networks (CDANs) condition the domain discriminator on a multilinear map of feature representations and category predictions so as to enable discriminative alignment of multi-mode structures (Long et al., 2018) . The conditional generative adversarial networks for structured domain adaptation learn a conditional generator to transform the feature maps of source domain images as if they were extracted from the target domain, and a discriminator to encourage realistic transformations for the semantic segmentation of urban scenes (Hong et al., 2018) . Sharing some spirit of CGAN, this paper extends conditional adversarial learning in MDTC, enabling a domain discriminator on the shared features while conditioning it on the class-aware information conveyed by the label predictions. Moreover, in order to guarantee the generalizability of the learned features, we also utilize the entropy conditioning strategy.",
"cite_spans": [
{
"start": 127,
"end": 153,
"text": "(Mirza and Osindero, 2014)",
"ref_id": "BIBREF22"
},
{
"start": 425,
"end": 451,
"text": "(Mirza and Osindero, 2014)",
"ref_id": "BIBREF22"
},
{
"start": 927,
"end": 946,
"text": "(Long et al., 2018)",
"ref_id": "BIBREF20"
},
{
"start": 1269,
"end": 1288,
"text": "(Hong et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this paper, we consider MDTC tasks in the following setting. Assume there exist M domains",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "{D i } M i=1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "For each domain, both labeled and unlabeled samples are taken into consideration. Specifically, D i contains two parts: a limited amount of labeled samples",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "L i = {(x j , y j )} l i j=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "; and a large amount of unlabeled samples",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "U i = {x j } u i j=1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "The challenge of MDTC lies in how to improve the system performance of mapping the input x to its corresponding label y by leveraging all available resources across different domains. The performance is measured as the average classification accuracy across the M domains. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "We propose conditional adversarial networks (CANs), as shown in Figure 2 , which adopt the shared-private scheme and consist of four components: a shared feature extractor F s , a set of domain-specific feature extractors",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 72,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3.1"
},
{
"text": "{F i d } M i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3.1"
},
{
"text": ", a conditional domain discriminator D, and a text classifier C. The shared feature extractor F s learns to capture domain-invariant features that are beneficial to classification across all domains, while each domain-specific feature extractor F i d aims to learn knowledge that is unique to its own domain. The architecture of these feature extractors are flexible and can be decided based on the practical task. For instance, it can adopt the form of a convolutional neural network (CNN), a recurrent neural network (RNN), or a multi-layer perceptron (MLP). Here, a feature extractor generates vectors with a fixed length, which is considered as the hidden representation of certain input. The classifier C takes the concatenation of a shared feature and a domainspecific feature as its input and outputs label probabilities. The conditional domain discriminator D takes the concatenation of a shared feature and the prediction of the given instance provided by C as its input and predicts the likelihood of that instance coming from each domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3.1"
},
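Below is a minimal PyTorch sketch of how the four components described above could be wired together. The layer sizes, class names, and forward pass are illustrative assumptions for a binary-sentiment setting, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class MLPFeatureExtractor(nn.Module):
    """Maps an input vector to a fixed-length hidden representation."""
    def __init__(self, input_dim, hidden_dims, output_dim, dropout=0.4):
        super().__init__()
        layers, prev = [], input_dim
        for h in hidden_dims:
            layers += [nn.Linear(prev, h), nn.ReLU(), nn.Dropout(dropout)]
            prev = h
        layers.append(nn.Linear(prev, output_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

class CAN(nn.Module):
    """Shared-private CAN: shared extractor F_s, per-domain extractors {F_d^i},
    classifier C over [shared; private] features, and a conditional domain
    discriminator D over [shared feature; label prediction]."""
    def __init__(self, num_domains, input_dim=5000, shared_dim=128,
                 private_dim=64, num_classes=2):
        super().__init__()
        self.F_s = MLPFeatureExtractor(input_dim, [1000, 500], shared_dim)
        self.F_d = nn.ModuleList(
            [MLPFeatureExtractor(input_dim, [1000, 500], private_dim)
             for _ in range(num_domains)])
        self.C = nn.Sequential(
            nn.Linear(shared_dim + private_dim, shared_dim + private_dim),
            nn.ReLU(),
            nn.Linear(shared_dim + private_dim, num_classes))
        self.D = nn.Sequential(
            nn.Linear(shared_dim + num_classes, shared_dim + num_classes),
            nn.ReLU(),
            nn.Linear(shared_dim + num_classes, num_domains))

    def forward(self, x, domain_idx):
        f = self.F_s(x)                                    # shared feature
        p = self.F_d[domain_idx](x)                        # domain-specific feature
        class_logits = self.C(torch.cat([f, p], dim=1))    # label prediction from [f, p]
        c = torch.softmax(class_logits, dim=1)
        domain_logits = self.D(torch.cat([f, c], dim=1))   # discriminator conditioned on [f, c]
        return class_logits, domain_logits
```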
{
"text": "Adversarial learning has been successfully investigated in minimizing divergences among domains (Chen and Cardie, 2018; Zhao et al., 2017) . In standard adversarial learning for MDTC, a twoplayer mini-max game is conducted between a domain discriminator and a shared feature extractor: the domain discriminator is trained to distinguish features across different domains, and the shared feature extractor aims to deceive the discriminator. By performing this mini-max optimization, the domain-invariant features can be learned. The error function of the domain discriminator corresponds well to the divergences among domains. Most MDTC methods align the marginal distributions. However, the transferability with representations transition from general to specific along deep networks is decreasing significantly (Yosinski et al., 2014) , only adapting the marginal distributions is not sufficient to guarantee the global alignment. In addition, when the data distributions embody complex structures, which is a real scenario for NLP applications, there is a high risk of failure by matching features with different labels.",
"cite_spans": [
{
"start": 96,
"end": 119,
"text": "(Chen and Cardie, 2018;",
"ref_id": "BIBREF5"
},
{
"start": 120,
"end": 138,
"text": "Zhao et al., 2017)",
"ref_id": "BIBREF37"
},
{
"start": 812,
"end": 835,
"text": "(Yosinski et al., 2014)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Adversarial Training",
"sec_num": "3.2"
},
{
"text": "Recent advances in the conditional generative adversarial network (CGAN) disclose that better alignment on two different distributions can be obtained by conditioning the generator and discriminator on class-aware information (Mirza and Osindero, 2014) . The discriminative information provided by the label prediction potentially reveals the structure information underlying the data distribution. Thus, conditional adversarial learning can better model the divergences among domains on shared feature representations and label predictions. Unlike the prior works that adapting the marginal distributions (Liu et al., 2017; Chen and Cardie, 2018) , our proposed CAN framework is formalized on aligning joint distributions of shared features and label predictions. There exist two training flows in our model. Due to the nature of adversarial learning, the conditional domain dis-criminator is updated with a separate optimizer, while the other components of CAN are trained with the main optimizer. These two flows are supposed to complement each other. Denote L C and L D as the loss functions of the classifier C and the conditional domain discriminator D, respectively. We utilize the negative log-likelihood (NLL) loss to encode these two loss functions:",
"cite_spans": [
{
"start": 226,
"end": 252,
"text": "(Mirza and Osindero, 2014)",
"ref_id": "BIBREF22"
},
{
"start": 606,
"end": 624,
"text": "(Liu et al., 2017;",
"ref_id": "BIBREF18"
},
{
"start": 625,
"end": 647,
"text": "Chen and Cardie, 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Adversarial Training",
"sec_num": "3.2"
},
{
"text": "L C ( y, y) = \u2212 log P ( y = y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Adversarial Training",
"sec_num": "3.2"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Adversarial Training",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L D ( d, d) = \u2212 log P ( d = d)",
"eq_num": "(2)"
}
],
"section": "Conditional Adversarial Training",
"sec_num": "3.2"
},
{
"text": "where y is the true label, y is the label prediction, d is the domain index and d is the domain label prediction. Therefore, we formulate CAN as a minimax optimization problem with two competitive terms defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Adversarial Training",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "J C = M i=1 E (x,y)\u223cL i [L C (C i , y)]",
"eq_num": "(3)"
}
],
"section": "Conditional Adversarial Training",
"sec_num": "3.2"
},
{
"text": "J D = M i=1 E x\u223cL i \u222aU i [L D (D([F s (x), C i ]), d)] (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Adversarial Training",
"sec_num": "3.2"
},
{
"text": "where [\u2022, \u2022] is the concatenation of two vectors,",
"cite_spans": [
{
"start": 6,
"end": 12,
"text": "[\u2022, \u2022]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Adversarial Training",
"sec_num": "3.2"
},
{
"text": "C i = C([F s (x), F i d (x)])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Adversarial Training",
"sec_num": "3.2"
},
{
"text": "is the prediction probability of the given instance x. C and D adopt MLPs with a softmax layer on top. For the domainspecific feature extractors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Adversarial Training",
"sec_num": "3.2"
},
{
"text": "{F i d } M i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Adversarial Training",
"sec_num": "3.2"
},
{
"text": ", the training is straightforward, as their objective is simple: help C perform better classification. While the shared feature extractor F s has two goals: (1) help C reduce prediction errors, and (2) confuse D to reach equilibrium.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Adversarial Training",
"sec_num": "3.2"
},
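As a concrete illustration of Eq. (3) and Eq. (4), the sketch below computes the per-domain classification term of J_C and the conditional discriminator term of J_D with NLL losses. It assumes a model object shaped like the CAN sketch above (returning class logits and domain logits); the function name and signature are illustrative.

```python
import torch
import torch.nn.functional as F

def can_losses(model, x, y, domain_idx):
    """Per-batch terms of J_C (Eq. 3) and J_D (Eq. 4) for one domain.

    J_C is the NLL of the label prediction on labeled data; J_D is the NLL of the
    conditional domain discriminator, whose input is [F_s(x), C_i].
    """
    class_logits, domain_logits = model(x, domain_idx)
    d = torch.full((x.size(0),), domain_idx, dtype=torch.long, device=x.device)
    j_c = F.cross_entropy(class_logits, y)    # L_C: -log P(y_hat = y)
    j_d = F.cross_entropy(domain_logits, d)   # L_D: -log P(d_hat = d)
    return j_c, j_d
```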
{
"text": "We condition the domain discriminator D on the joint variable (f, c) = (F s (x), C i ). For brevity, here we use f and c to denote F s (x) and C i , respectively. If we enforce different instances to have equal importance, the hard-to-transfer instances with uncertain predictions may deteriorate the system performance (Saito et al., 2019) . In order to alleviate the harmful effects introduced by the hardto-transfer instances, we introduce the entropy criterion E(c) = \u2212 2 k=1 [c k logc k ] to quantify the uncertainty of label predictions, where c k is the probability of predicting an instance to category k (negative: k = 1, positive: k = 2). By using the entropy conditioning, the easy-to-transfer instances with certain predictions are given higher priority. We reweigh these instances by an entropy-aware term: w(c) = 1 + e \u2212E(c) . Therefore, the improved J D is defined as:",
"cite_spans": [
{
"start": 320,
"end": 340,
"text": "(Saito et al., 2019)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy Conditioning",
"sec_num": "3.3"
},
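A small sketch of this entropy-aware reweighting: the entropy E(c) of each label prediction determines a weight w(c) = 1 + e^{-E(c)} on the per-instance discriminator loss, so confident (easy-to-transfer) instances count more. Function names are illustrative, not from the paper's code.

```python
import torch
import torch.nn.functional as F

def entropy_weight(class_probs, eps=1e-8):
    """w(c) = 1 + exp(-E(c)), with E(c) = -sum_k c_k log c_k.
    For binary predictions, w ranges from about 1.5 (maximally uncertain) to 2 (fully confident)."""
    entropy = -(class_probs * torch.log(class_probs + eps)).sum(dim=1)
    return 1.0 + torch.exp(-entropy)

def weighted_domain_loss(domain_logits, domain_labels, class_probs):
    """Per-batch term of the improved objective J_D^E (Eq. 5): entropy-weighted discriminator NLL."""
    per_instance = F.cross_entropy(domain_logits, domain_labels, reduction="none")
    w = entropy_weight(class_probs.detach())  # treat the weight as a constant per instance
    return (w * per_instance).mean()
```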
{
"text": "J E D = M i=1 E x\u223cL i \u222aU i [w(c)L D (D([F s (x), C i ]), d)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy Conditioning",
"sec_num": "3.3"
},
{
"text": "(5) Therefore, the mini-max game of CAN is formulated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy Conditioning",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "min Fs,{F i d } M i=1 ,C max D J C + \u03bbJ E D",
"eq_num": "(6)"
}
],
"section": "Entropy Conditioning",
"sec_num": "3.3"
},
{
"text": "where \u03bb is a hyperparameter balancing the two objectives. The entropy conditioning empowers the entropy minimization principle (Grandvalet and Bengio, 2005) and controls the certainty of the predictions, enabling CAN have the ability to generalize on unseen domains with no labeled data. The CAN training is illustrated in Algorithm 1.",
"cite_spans": [
{
"start": 127,
"end": 156,
"text": "(Grandvalet and Bengio, 2005)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy Conditioning",
"sec_num": "3.3"
},
{
"text": "Algorithm 1 Stochastic gradient descent training algorithm 1: Input: labeled data L i and unlabeled data U i in M domains; a hyperparameter \u03bb. 2: for number of training iterations do",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entropy Conditioning",
"sec_num": "3.3"
},
{
"text": "Sample labeled mini-batches from the multiple domains",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3:",
"sec_num": null
},
{
"text": "B = {B 1 , \u2022 \u2022 \u2022 , B M }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3:",
"sec_num": null
},
{
"text": "Sample unlabeled mini-batches from the multiple domains",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4:",
"sec_num": null
},
{
"text": "B u = {B u 1 , \u2022 \u2022 \u2022 , B u M }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4:",
"sec_num": null
},
{
"text": "5:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4:",
"sec_num": null
},
{
"text": "Calculate loss = J C + \u03bbJ E D on B and B u ; Update F s , {F i d } M i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4:",
"sec_num": null
},
{
"text": ", C by descending along the gradients \u2206loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4:",
"sec_num": null
},
{
"text": "Calculate l D = J E D on B and B u ; Update D by ascending along the gradients \u2206l D . 7: end for",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6:",
"sec_num": null
},
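The following is a condensed sketch of the alternating, two-optimizer scheme of Algorithm 1: one optimizer for the main components (F_s, {F_d^i}, C) and one for the discriminator D. It assumes a CAN-like module as sketched earlier, that labeled loaders yield (x, y) pairs and unlabeled loaders yield bare feature tensors, and it realizes the mini-max game in the usual adversarial convention: D is updated to minimize its conditional NLL, while the main step subtracts the discriminator term so the shared extractor learns to confuse D. This sign convention is an assumption of the sketch, not a verbatim transcription of the pseudocode above.

```python
import itertools
import torch
import torch.nn.functional as F

def train_can(model, labeled_loaders, unlabeled_loaders, lam=1.0,
              num_iters=10000, lr=1e-4, device="cpu"):
    """Alternating updates of the main components and the conditional discriminator."""
    M = len(labeled_loaders)
    main_params = itertools.chain(model.F_s.parameters(),
                                  model.F_d.parameters(),
                                  model.C.parameters())
    opt_main = torch.optim.Adam(main_params, lr=lr)
    opt_d = torch.optim.Adam(model.D.parameters(), lr=lr)
    lab = [itertools.cycle(dl) for dl in labeled_loaders]
    unl = [itertools.cycle(dl) for dl in unlabeled_loaders]

    def entropy_weighted_nll(x, i):
        """Entropy-weighted NLL of the conditional discriminator on domain i (Eq. 5 term)."""
        class_logits, domain_logits = model(x, i)
        d = torch.full((x.size(0),), i, dtype=torch.long, device=x.device)
        nll = F.cross_entropy(domain_logits, d, reduction="none")
        c = torch.softmax(class_logits, dim=1).detach()
        w = 1.0 + torch.exp((c * torch.log(c + 1e-8)).sum(dim=1))  # w(c) = 1 + e^{-E(c)}
        return (w * nll).mean()

    for _ in range(num_iters):
        # Main step: minimize J_C while pushing the shared features to confuse D.
        opt_main.zero_grad()
        loss = 0.0
        for i in range(M):
            x, y = next(lab[i]); xu = next(unl[i])
            x, y, xu = x.to(device), y.to(device), xu.to(device)
            class_logits, _ = model(x, i)
            loss = loss + F.cross_entropy(class_logits, y)          # J_C term (Eq. 3)
            loss = loss - lam * (entropy_weighted_nll(x, i)
                                 + entropy_weighted_nll(xu, i))     # adversarial J_D^E term
        loss.backward()
        opt_main.step()

        # Discriminator step: minimize its own entropy-weighted conditional NLL.
        opt_d.zero_grad()
        d_loss = 0.0
        for i in range(M):
            x, _ = next(lab[i]); xu = next(unl[i])
            d_loss = d_loss + entropy_weighted_nll(x.to(device), i) \
                            + entropy_weighted_nll(xu.to(device), i)
        d_loss.backward()
        opt_d.step()
```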
{
"text": "In this section, we present an analysis showing the validity of the CAN approach for MDTC. All proofs are given in the Appendix. The objective of CAN is equivalent to minimizing the total divergence among the M joint distributions. First, we define different joint distributions as P i (f, c)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Analysis",
"sec_num": "3.4"
},
{
"text": "P (f = F s (x), c = C i |x \u2208 D i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Analysis",
"sec_num": "3.4"
},
{
"text": "Combining L D with J D , the objective of D can be written as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Analysis",
"sec_num": "3.4"
},
{
"text": "J D = \u2212 M i=1 E (f,c)\u223cP i [log D i ([f, c])] (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Analysis",
"sec_num": "3.4"
},
{
"text": "where D i ([f, c] ) yields the probability of the vector ([f, c]) coming from the i-th domain. We first derive that CAN could achieve its optimum if and only if all M joint distributions are identical.",
"cite_spans": [],
"ref_spans": [
{
"start": 10,
"end": 17,
"text": "([f, c]",
"ref_id": null
}
],
"eq_spans": [],
"section": "Theoretical Analysis",
"sec_num": "3.4"
},
{
"text": "Lemma 1. For any given F s , {F i d } M i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Analysis",
"sec_num": "3.4"
},
{
"text": "and C, the optimum conditional domain discriminator D * is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Analysis",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D * i ([f, c]) = P i (f, c) M j=1 P j (f, c)",
"eq_num": "(8)"
}
],
"section": "Theoretical Analysis",
"sec_num": "3.4"
},
{
"text": "Then we provide the main theorem for the CAN framework:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Analysis",
"sec_num": "3.4"
},
{
"text": "Theorem 1. Let P (f, c) = M i=1 P i (f,c) M",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Analysis",
"sec_num": "3.4"
},
{
"text": ", when D is trained to its optimum D * , we have:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Analysis",
"sec_num": "3.4"
},
{
"text": "J D * = M log M \u2212 M i=1 KL(P i (f, c)|| P (f, c)) (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Analysis",
"sec_num": "3.4"
},
{
"text": "where KL(\u2022) is the Kullback-Leibler (KL) divergence (Aslam and Pavlu, 2007) of each joint distribution P i (f, c) to the centroid P (f, c).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Analysis",
"sec_num": "3.4"
},
{
"text": "Finally, considering the non-negativity and convexity of the KL-divergence (Brillouin, 2013), we have:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Analysis",
"sec_num": "3.4"
},
{
"text": "Corollary 1. When D is trained to its optimum D * , J D * is M log M .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Analysis",
"sec_num": "3.4"
},
{
"text": "The optimum can be obtained if and only if P 1 (f, c) = P 2 (f, c) = ... = P M (f, c) = P (f, c).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Analysis",
"sec_num": "3.4"
},
{
"text": "Therefore, by using conditional adversarial training, we can train the conditional domain discriminator on the joint variable (f, c) to minimize the total divergence across different domains, yielding promising performance on MDTC tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Analysis",
"sec_num": "3.4"
},
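As a quick numerical illustration of Lemma 1 and Theorem 1 (not part of the paper), the sketch below builds a few discrete toy joint distributions, plugs in the optimal discriminator D_i^* = P_i / sum_j P_j, and checks that J_{D^*} equals M log M minus the total KL divergence to the centroid, reaching M log M exactly when all distributions coincide.

```python
import numpy as np

def j_d_star(dists):
    """J_{D*} = -sum_i E_{P_i}[log D_i*], with D_i* = P_i / sum_j P_j (Lemma 1)."""
    dists = np.asarray(dists, dtype=float)              # shape (M, K): M discrete joint distributions
    d_star = dists / dists.sum(axis=0, keepdims=True)
    return -(dists * np.log(d_star)).sum()

def theorem_rhs(dists):
    """M log M - sum_i KL(P_i || P_bar), with P_bar the centroid of the M distributions (Theorem 1)."""
    dists = np.asarray(dists, dtype=float)
    M = dists.shape[0]
    centroid = dists.mean(axis=0)
    total_kl = (dists * np.log(dists / centroid)).sum()
    return M * np.log(M) - total_kl

P = [[0.7, 0.2, 0.1], [0.3, 0.4, 0.3], [0.1, 0.2, 0.7]]
print(np.isclose(j_d_star(P), theorem_rhs(P)))          # True: Eq. (9) holds
same = [[0.5, 0.3, 0.2]] * 3
print(np.isclose(j_d_star(same), 3 * np.log(3)))        # True: maximum M log M when P_1 = ... = P_M
```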
{
"text": "We evaluate the effectiveness of the CAN model on both MDTC and unsupervised multi-source domain adaptation tasks. The former refers to the setting where the test data falls into one of the M domains, and the latter refers to the setting where the test data comes from an unseen domain without labels. Moreover, an ablation study is provided for further analysis of the CAN model. We conduct experiments on two MDTC benchmarks: the Amazon review dataset (Blitzer et al., 2007) and the FDU-MTL dataset (Liu et al., 2017) . The Amazon review dataset consists of four domains: books, DVDs, electronics, and kitchen. For each domain, there exist 2,000 instances: 1,000 positive ones and 1,000 negative ones. All data was pre-processed into a bag of features (unigrams and bigrams), losing all word order information.",
"cite_spans": [
{
"start": 454,
"end": 476,
"text": "(Blitzer et al., 2007)",
"ref_id": "BIBREF1"
},
{
"start": 501,
"end": 519,
"text": "(Liu et al., 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "In our experiments, the 5,000 most frequent features are used, representing each review as a 5,000dimensional vector. The FDU-MTL dataset is a more complicated dataset, which contains 16 domains: books, electronics, DVDs, kitchen, apparel, camera, health, music, toys, video, baby, magazine, software, sport, IMDB, and MR. All data in the FDU-MTL dataset are raw text data, tokenized by the Stanford tokenizer. The detailed statistics of the FDU-MTL dataset are listed in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 472,
"end": 479,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Implementation Details All experiments are implemented by using PyTorch. The CAN has one hyperparameter: \u03bb, which is fixed as 1 in all experiments, the parameter sensitivity analysis is presented in the Appendix. We use Adam optimizer (Kingma and Ba, 2014) , with the learning rate 0.0001, for training. The batch size is 8. We adopt the same model architecture as in (Chen and Cardie, 2018) . For the Amazon Review dataset, MLPs are used as feature extractors, with an input size of 5,000. Each feature extractor is composed of two hidden layers, with size 1,000 and 500, respectively. The output size of the shared feature extractor is 128 while 64 for the domain-specific ones. The dropout rate is 0.4 for each component. Classifier and discriminator are MLPs with one hidden layer of the same size as their input (128 + 64 for classifier and 128+2 for discriminator). ReLU is used as the activation function. For the FDU-MTL dataset, CNN with a single convolutional layer is used as the feature extractor. It uses different kernel sizes (3, 4, 5), and the number of kernels is 200. The input of the convolutional layer is a 100-dimensional vector, obtained by using word2vec (Mikolov et al., 2013) , for each word in the input sequence.",
"cite_spans": [
{
"start": 235,
"end": 256,
"text": "(Kingma and Ba, 2014)",
"ref_id": "BIBREF15"
},
{
"start": 368,
"end": 391,
"text": "(Chen and Cardie, 2018)",
"ref_id": "BIBREF5"
},
{
"start": 1179,
"end": 1201,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
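A sketch of the two feature-extractor configurations reported above, written against the stated hyperparameters (5,000-d bag-of-features MLP with hidden sizes 1,000/500 and dropout 0.4 for the Amazon data; a single-layer CNN with 200 kernels of widths 3/4/5 over 100-d embeddings for FDU-MTL). The exact module layout is an assumption, not the released code.

```python
import torch
import torch.nn as nn

def amazon_mlp_extractor(output_dim):
    """MLP extractor for the 5,000-d bag-of-features Amazon input: two hidden layers
    (1,000 and 500 units), dropout 0.4, ReLU, and a fixed-length output
    (128 for the shared extractor, 64 for each domain-specific one)."""
    return nn.Sequential(
        nn.Linear(5000, 1000), nn.ReLU(), nn.Dropout(0.4),
        nn.Linear(1000, 500), nn.ReLU(), nn.Dropout(0.4),
        nn.Linear(500, output_dim),
    )

class FduCnnExtractor(nn.Module):
    """Single-layer CNN extractor for FDU-MTL: 200 kernels for each width in {3, 4, 5}
    over 100-d word embeddings, max-pooled over time and concatenated."""
    def __init__(self, vocab_size, emb_dim=100, num_kernels=200,
                 kernel_sizes=(3, 4, 5), output_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # initialized from word2vec in the paper
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, num_kernels, k) for k in kernel_sizes])
        self.proj = nn.Linear(num_kernels * len(kernel_sizes), output_dim)

    def forward(self, token_ids):
        e = self.embed(token_ids).transpose(1, 2)        # (batch, emb_dim, seq_len)
        pooled = [torch.relu(conv(e)).max(dim=2).values for conv in self.convs]
        return self.proj(torch.cat(pooled, dim=1))
```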
{
"text": "Comparison Methods We first conduct experiments of multi-domain text classification. The CAN model is compared with a number of state-ofthe-art methods, which are listed below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Domain Text Classification",
"sec_num": "4.2"
},
{
"text": "\u2022 MT-CNN: A CNN-based model which shares the lookup table across domains for better word embeddings (Collobert and Weston, 2008) .",
"cite_spans": [
{
"start": 100,
"end": 128,
"text": "(Collobert and Weston, 2008)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Domain Text Classification",
"sec_num": "4.2"
},
{
"text": "\u2022 MT-DNN: The multi-task deep neural network model with bag-of-words input and MLPs, in which a hidden layer is shared (Liu et al., 2015) .",
"cite_spans": [
{
"start": 119,
"end": 137,
"text": "(Liu et al., 2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Domain Text Classification",
"sec_num": "4.2"
},
{
"text": "\u2022 CMSC-LS, CMSC-SVM, CMSC-Log: The collaborative multi-domain sentiment clas-sification method combines an overall classifier across domains and a set of domaindependent classifiers to make the final prediction. The models are trained on least square loss, hinge loss, and log loss, respectively (Wu and Huang, 2015) .",
"cite_spans": [
{
"start": 296,
"end": 316,
"text": "(Wu and Huang, 2015)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Domain Text Classification",
"sec_num": "4.2"
},
{
"text": "\u2022 ASP-MTL: The adversarial multi-task learning framework of text classification, which adopts the share-private scheme, adversarial learning, and orthogonality constraints (Liu et al., 2017) .",
"cite_spans": [
{
"start": 172,
"end": 190,
"text": "(Liu et al., 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Domain Text Classification",
"sec_num": "4.2"
},
{
"text": "\u2022 MAN-L2, MAN-NLL: The multinomial adversarial network for multi-domain text classification (Chen and Cardie, 2018) . This model uses two forms of loss functions to train the domain discriminator: least square loss and negative log-likelihood loss.",
"cite_spans": [
{
"start": 92,
"end": 115,
"text": "(Chen and Cardie, 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Domain Text Classification",
"sec_num": "4.2"
},
{
"text": "\u2022 MT-BL: The multi-task learning with bidirectional language models for text classification, which adds language modeling and a uniform label distribution-based loss constraint to the domain-specific feature extractors and the shared feature extractor, respectively (Yang and Shang, 2019) .",
"cite_spans": [
{
"start": 266,
"end": 288,
"text": "(Yang and Shang, 2019)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Domain Text Classification",
"sec_num": "4.2"
},
{
"text": "All the comparison methods use the standard partitions of the datasets. Thus, we directly cite the results from (Chen and Cardie, 2018; Yang and Shang, 2019) for fair comparisons.",
"cite_spans": [
{
"start": 112,
"end": 135,
"text": "(Chen and Cardie, 2018;",
"ref_id": "BIBREF5"
},
{
"start": 136,
"end": 157,
"text": "Yang and Shang, 2019)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Domain Text Classification",
"sec_num": "4.2"
},
{
"text": "We conduct MDTC experiments following the setting of (Chen and Cardie, 2018): A 5fold cross-validation is implemented for the Amazon review dataset. All data is divided into 5 folds per domain: three of the five folds are used as the training set, one is the validation set, and the remaining one is treated as the test set. The 5-fold average test accuracy is reported. All reports are based on 5 runs. Table 2 and Table 3 show the experimental results on the Amazon review dataset and the FDU-MTL dataset, respectively. From Table 2 , it can be seen that our model yields state-of-the-art results not only for the average classification accuracy, but also on each individual domain.",
"cite_spans": [],
"ref_spans": [
{
"start": 404,
"end": 411,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 416,
"end": 423,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 527,
"end": 534,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": null
},
{
"text": "From the experimental results on the FDU-MTL dataset, reported in Table 3 , we can see that the CAN model obtains the best accuracies on 10 of 16 domains and achieves the best result in terms of the average classification accuracy. The experimental results on these two MDTC benchmarks illustrate the efficacy of our model.",
"cite_spans": [],
"ref_spans": [
{
"start": 66,
"end": 73,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": null
},
{
"text": "The CAN model adopts the conditional domain discriminator and entropy conditioning. In this section, we investigate how these two strategies impact the performance of our model on the Amazon review dataset. In particular, three ablation variants are evaluated: (1) CAN w/o C, the variant of the proposed CAN model without conditioning the domain discriminator on label predictions, which utilizes the standard domain discriminator and entropy conditioning; (2) CAN w/o E, the variant of the proposed CAN model without the entropy conditioning, which hence imposes equal importance to different instances; (3) CAN w/o CE, the variant of the proposed CAN model which only uses standard adversarial training for domain alignment. The results of the ablation study are shown in Table 4 , where we can see that all variants produce inferior results. Thus, it indicates that both strategies contribute to the CAN model.",
"cite_spans": [],
"ref_spans": [
{
"start": 774,
"end": 781,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": null
},
{
"text": "In the MDTC scenario, the model requires labeled training data from each domain. However, in realworld applications, many domains may have no labeled data at all. Therefore, it is important to evaluate the performance of MDTC models on unseen domains (Wright and Augenstein, 2020) .",
"cite_spans": [
{
"start": 251,
"end": 280,
"text": "(Wright and Augenstein, 2020)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Multi-Source Domain Adaptation",
"sec_num": "4.3"
},
{
"text": "In the unsupervised multi-source domain adaptation setting, we have multiple source domains with both labeled and unlabeled data and one target domain with only unlabeled data. The CAN has the ability to learn domain-invariant representations on unlabeled data, and thus it can be generalized to unseen domains. Since the target domain has no labeled data at all, the domain discriminator is updated only on unlabeled data in this setting. When conducting text classification on the target domain, we only feed the shared feature to C and set the domain-specific feature vector to 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Multi-Source Domain Adaptation",
"sec_num": "4.3"
},
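A minimal sketch of this unseen-domain prediction rule, assuming a CAN-like model as sketched earlier: the classifier receives the shared feature concatenated with a zero vector in place of the missing domain-specific feature.

```python
import torch

@torch.no_grad()
def predict_unseen_domain(model, x, private_dim=64):
    """Classify instances from a target domain that has no domain-specific extractor."""
    f = model.F_s(x)                                               # shared, domain-invariant feature
    zeros = torch.zeros(x.size(0), private_dim, device=x.device)   # stand-in for the missing F_d^i(x)
    logits = model.C(torch.cat([f, zeros], dim=1))
    return logits.argmax(dim=1)
```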
{
"text": "We conduct the experiments on the Amazon review dataset. In the experiments, three of the four domains are regarded as the source domains, and the remaining one is used as the target one. The evaluations are conducted on the target domain. In order to validate CAN's effectiveness, we compare CAN with several domain-agnostic methods:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Multi-Source Domain Adaptation",
"sec_num": "4.3"
},
{
"text": "(1) the MLP model; (2) the marginalized denoising autoencoder (mSDA) (Chen et al., 2012) ; (3) the domain adversarial neural network (DANN) (Ganin et al., 2016) . These methods ignore the differences among domains. And certain state-of-theart unsupervised multi-source domain adaptation methods: (4) the multi-source domain adaptation neural network (MDAN(H) and MDAN(S)) (Zhao et al., 2017) ; (5) the multinomial adversarial network (MAN-L2 and MAN-NLL) (Chen and Cardie, 2018) . When training the domain-agnostic methods, the data in the multiple source domains are combined together as a single source domain.",
"cite_spans": [
{
"start": 69,
"end": 88,
"text": "(Chen et al., 2012)",
"ref_id": "BIBREF4"
},
{
"start": 140,
"end": 160,
"text": "(Ganin et al., 2016)",
"ref_id": "BIBREF9"
},
{
"start": 372,
"end": 391,
"text": "(Zhao et al., 2017)",
"ref_id": "BIBREF37"
},
{
"start": 455,
"end": 478,
"text": "(Chen and Cardie, 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Multi-Source Domain Adaptation",
"sec_num": "4.3"
},
{
"text": "In Table 5 , we observe that the CAN model outperforms all the comparison methods on three out of four domains. In terms of the average classification accuracy, the CAN method achieves superior performance. This suggests that our model has a good ability to generalize on unseen domains.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Unsupervised Multi-Source Domain Adaptation",
"sec_num": "4.3"
},
{
"text": "In this paper, we propose conditional adversarial networks (CANs) for MDTC. This approach can perform alignment on joint distributions of shared features and label predictions to improve the system performance. The CAN approach adopts the shared-private paradigm, trains domain discriminator by conditioning it on discriminative information conveyed by the label predictions to encourage the shared feature extractor to capture more discriminative information, and exploits entropy conditioning to guarantee the transferability of the learned features. Experimental results on two MDTC benchmarks demonstrate that the CAN model can not only boost the average classification accuracy for MDTC but also promote the generalization ability when tackling unseen domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "D i ([f, c])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "is the probability of the vector ([f, c]) coming from the i-th domain. Therefore, we have:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "M i=1 D i ([f, c]) = 1 (12)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Lemma 1. For any given F s , {F i d } M i=1 and C, the optimum conditional domain discriminator D * is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D * i ([f, c]) = P i (f, c) M j=1 P j (f, c)",
"eq_num": "(13)"
}
],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Proof. For any given F s ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "{F i d } M i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "and C, the optimum",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "D * = arg min D J D = arg min D \u2212 M i=1 E (f,c)\u223cP i [log D i ([f, c])] = arg max D M i=1 (f,c) P i (f, c) log D i ([f, c])d(f, c) = arg max D (f,c) M i=1 P i (f, c) log D i ([f, c])d(f, c)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Here, we utilize the Lagrangian Multiplier for D * under the condition (12). We have:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L(D 1 , ..., D M , \u03bb) = M i=1 P i log D i \u2212 \u03bb( M i=1 D i \u2212 1)",
"eq_num": "(14)"
}
],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Let \u2207L = 0, we have:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "\u2207 D i M j=1 P j log D j \u2212 \u03bb\u2207 D i ( M j=1 D j \u2212 1) = 0 M i=1 D i = 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "From the two above equations, we have:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "D * i (f, c) = P i (f, c) M j=1 P j (f, c) (15) Theorem 1. Let P (f, c) = M i=1 P i (f,c) M",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": ", when D is trained to its optimum D * , we have:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "J D * = M log M \u2212 M i=1 KL(P i (f, c)|| P (f, c)) (16)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "where KL(\u2022) is the Kullback-Leibler (KL) divergence of each joint distribution P i (f, c) to the centroid P (f, c)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Proof. Let P (f, c) = M i=1 P i (f,c) M",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": ". We have:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "M i=1 KL(P i (f, c)|| P (f, c)) = M i=1 E (f,c)\u223cP i [log P i (f, c) P (f, c) ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "When D is updated to D * , we have:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "J D * = \u2212 M i=1 E (f,c)\u223cP i [log D * i ([f, c])] = \u2212 M i=1 E (f,c)\u223cP i [log P i (f, c) M j=1 P j (f, c) ] = \u2212 M i=1 E (f,c)\u223cP i [log P i (f, c) M j=1 P j (f, c) + log M ] + M log M = M log M \u2212 M i=1 E (f,c)\u223cP i [log P i (f, c) M j=1 P j (f,c) M ] = M log M \u2212 M i=1 E (f,c)\u223cP i [log P i (f, c) P (f, c) ] = M log M \u2212 M i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "KL(P i (f, c)|| P (f, c))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In our model, a mini-max game is implemented to achieve the optimum:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "min Fs,{F i d } M i=1 ,C max D J C + \u03bbJ D",
"eq_num": "(17)"
}
],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Therefore, by the non-negativity and convexity of the KL-divergence, we can have the corollary: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The proposed CAN approach has one hyperparameter \u03bb, which is used to balance J C and J E D . We conduct parameter sensitivity analysis on the Amazon review dataset. The \u03bb is evaluated in the range {0.0001, 0.001, 0.01, 0.1, 1.0, 5.0}. The experimental results are shown in Figure 3 . The average classification accuracies across the four domains are reported. It can be noted that from 0.0001 to 1.0, the performance increases with \u03bb increasing, the performance change is very small. Then the accuracy reaches the optimum at the point \u03bb = 1.0, while the further increase of \u03bb will dramatically deteriorate the performance. This suggests that the selection of \u03bb has an influence on the system performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 273,
"end": 281,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "A.2 Parameter Sensitivity Analysis",
"sec_num": null
}
],
"back_matter": [
{
"text": "A.1 Proofs for CAN Assume there exist M domains, for each domain D i , we have a joint distribution defined as:is the prediction probability of the given instance x, [\u2022, \u2022] is the concatenation of two vectors. The objective of D is to minimize J D :",
"cite_spans": [
{
"start": 166,
"end": 172,
"text": "[\u2022, \u2022]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Query hardness estimation using jensen-shannon divergence among multiple scoring functions",
"authors": [
{
"first": "A",
"middle": [],
"last": "Javed",
"suffix": ""
},
{
"first": "Virgil",
"middle": [],
"last": "Aslam",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pavlu",
"suffix": ""
}
],
"year": 2007,
"venue": "European conference on information retrieval",
"volume": "",
"issue": "",
"pages": "198--209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Javed A Aslam and Virgil Pavlu. 2007. Query hardness estimation using jensen-shannon divergence among multiple scoring functions. In European conference on information retrieval, pages 198-209. Springer.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification",
"authors": [
{
"first": "J",
"middle": [],
"last": "Blitzer",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of the annual meeting of the association of computational linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Blitzer, M. Dredze, and F. Pereira. 2007. Biogra- phies, bollywood, boom-boxes and blenders: Do- main adaptation for sentiment classification. In Proc. of the annual meeting of the association of computational linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Domain separation networks",
"authors": [
{
"first": "K",
"middle": [],
"last": "Bousmalis",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Trigeorgis",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Silberman",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Krishnan",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Erhan",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Bousmalis, G. Trigeorgis, N. Silberman, D. Krish- nan, and D. Erhan. 2016. Domain separation net- works. In Advances in Neural Information Process- ing Systems.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Science and information theory",
"authors": [
{
"first": "Leon",
"middle": [],
"last": "Brillouin",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leon Brillouin. 2013. Science and information theory. Courier Corporation.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Marginalized denoising autoencoders for domain adaptation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "K",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Sha",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of International Conference on International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Chen, Z. Xu, K.Q. Weinberger, and F. Sha. 2012. Marginalized denoising autoencoders for domain adaptation. In Proc. of International Conference on International Conference on Machine Learning.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Multinomial adversarial networks for multi-domain text classification",
"authors": [
{
"first": "X",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Chen and C. Cardie. 2018. Multinomial adversar- ial networks for multi-domain text classification. In Proc. of the Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning with compositional semantics as structural inference for subsentential sentiment analysis",
"authors": [
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "793--801",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yejin Choi and Claire Cardie. 2008. Learning with compositional semantics as structural inference for subsentential sentiment analysis. In Proceedings of the 2008 Conference on Empirical Methods in Natu- ral Language Processing, pages 793-801.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning",
"authors": [
{
"first": "R",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of the inter. conference on Machine learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Collobert and J. Weston. 2008. A unified architec- ture for natural language processing: Deep neural networks with multitask learning. In Proc. of the in- ter. conference on Machine learning.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Online methods for multi-domain learning and adaptation",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "689--697",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Dredze and Koby Crammer. 2008. Online meth- ods for multi-domain learning and adaptation. In Proceedings of the Conference on Empirical Meth- ods in Natural Language Processing, pages 689- 697. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Domain-adversarial training of neural networks",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Ganin",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Ustinova",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ajakan",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Germain",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Larochelle",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Laviolette",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Marchand",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Lempitsky",
"suffix": ""
}
],
"year": 2016,
"venue": "The Journal of Machine Learning Research",
"volume": "17",
"issue": "1",
"pages": "2096--2030",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky. 2016. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1):2096-2030.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Domain adaptation for large-scale sentiment classification: A deep learning approach",
"authors": [
{
"first": "X",
"middle": [],
"last": "Glorot",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of the international conference on machine learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Glorot, A. Bordes, and Y. Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proc. of the international conference on machine learning.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Generative adversarial nets",
"authors": [
{
"first": "I",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Pouget-Abadie",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mirza",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Warde-Farley",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ozair",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information proc. systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Ben- gio. 2014. Generative adversarial nets. In Advances in neural information proc. systems.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Semisupervised learning by entropy minimization",
"authors": [
{
"first": "Yves",
"middle": [],
"last": "Grandvalet",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2005,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "529--536",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yves Grandvalet and Yoshua Bengio. 2005. Semi- supervised learning by entropy minimization. In Advances in neural information processing systems, pages 529-536.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Conditional generative adversarial network for structured domain adaptation",
"authors": [
{
"first": "Weixiang",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Zhenzhen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Junsong",
"middle": [],
"last": "Yuan",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "1335--1344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weixiang Hong, Zhenzhen Wang, Ming Yang, and Jun- song Yuan. 2018. Conditional generative adversarial network for structured domain adaptation. In Pro- ceedings of the IEEE Conference on Computer Vi- sion and Pattern Recognition, pages 1335-1344.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Mining and summarizing customer reviews",
"authors": [
{
"first": "Minqing",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "168--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minqing Hu and Bing Liu. 2004. Mining and summa- rizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowl- edge discovery and data mining, pages 168-177.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "D",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "D.P. Kingma and J. Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A new strategy to detect lung cancer on ct images",
"authors": [
{
"first": "Lingling",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Lian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC)",
"volume": "",
"issue": "",
"pages": "716--722",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lingling Li, Yuan Wu, Yi Yang, Lian Li, and Bin Wu. 2018. A new strategy to detect lung cancer on ct im- ages. In 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC), pages 716- 722. IEEE.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Multi-domain sentiment classification",
"authors": [
{
"first": "S",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of the Annual Meeting of the Association for Computational Linguistics on Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Li and C. Zong. 2008. Multi-domain sentiment clas- sification. In Proc. of the Annual Meeting of the As- sociation for Computational Linguistics on Human Language Technologies.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Adversarial multi-task learning for text classification",
"authors": [
{
"first": "P",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of the Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Liu, X. Qiu, and X. Huang. 2017. Adversarial multi-task learning for text classification. In Proc. of the Annual Meeting of the Association for Com- putational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Representation learning using multi-task deep neural networks for semantic classification and information retrieval",
"authors": [
{
"first": "X",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Y",
"middle": [
"Y"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Liu, J. Gao, X. He, L. Deng, K. Duh, and Y.Y. Wang. 2015. Representation learning using multi-task deep neural networks for semantic classification and in- formation retrieval. In Proc. of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Conditional adversarial domain adaptation",
"authors": [
{
"first": "Mingsheng",
"middle": [],
"last": "Long",
"suffix": ""
},
{
"first": "Zhangjie",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Jianmin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 32nd International Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1647--1657",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. 2018. Conditional adversarial do- main adaptation. In Proceedings of the 32nd Inter- national Conference on Neural Information Process- ing Systems, pages 1647-1657.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "T. Mikolov, K. Chen, G. Corrado, and J. Dean. 2013. Efficient estimation of word representations in vec- tor space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Conditional generative adversarial nets",
"authors": [
{
"first": "Mehdi",
"middle": [],
"last": "Mirza",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Osindero",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1411.1784"
]
},
"num": null,
"urls": [],
"raw_text": "Mehdi Mirza and Simon Osindero. 2014. Condi- tional generative adversarial nets. arXiv preprint arXiv:1411.1784.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Thumbs up?: sentiment classification using machine learning techniques",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Shivakumar",
"middle": [],
"last": "Vaithyanathan",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL-02 conference on Empirical methods in natural language processing",
"volume": "10",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using machine learning techniques. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10, pages 79-86. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Online learning of multiple tasks and their relationships",
"authors": [
{
"first": "Avishek",
"middle": [],
"last": "Saha",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Rai",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e3",
"suffix": ""
},
{
"first": "Suresh",
"middle": [],
"last": "Venkatasubramanian",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics",
"volume": "",
"issue": "",
"pages": "643--651",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Avishek Saha, Piyush Rai, Hal Daum\u00c3, Suresh Venkatasubramanian, et al. 2011. Online learning of multiple tasks and their relationships. In Proceed- ings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 643-651.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Semi-supervised domain adaptation via minimax entropy",
"authors": [
{
"first": "Kuniaki",
"middle": [],
"last": "Saito",
"suffix": ""
},
{
"first": "Donghyun",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Sclaroff",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Darrell",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Saenko",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the IEEE International Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "8050--8058",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kuniaki Saito, Donghyun Kim, Stan Sclaroff, Trevor Darrell, and Kate Saenko. 2019. Semi-supervised domain adaptation via minimax entropy. In Proceed- ings of the IEEE International Conference on Com- puter Vision, pages 8050-8058.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Semantic compositionality through recursive matrix-vector spaces",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Brody",
"middle": [],
"last": "Huval",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning",
"volume": "",
"issue": "",
"pages": "1201--1211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Brody Huval, Christopher D Manning, and Andrew Y Ng. 2012. Semantic compositional- ity through recursive matrix-vector spaces. In Pro- ceedings of the 2012 joint conference on empirical methods in natural language processing and com- putational natural language learning, pages 1201- 1211. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Sentiment analysis of facebook statuses using naive bayes classifier for language learning",
"authors": [
{
"first": "Christos",
"middle": [],
"last": "Troussas",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Virvou",
"suffix": ""
},
{
"first": "Kurt",
"middle": [
"Junshean"
],
"last": "Espinosa",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Llaguno",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Caro",
"suffix": ""
}
],
"year": 2013,
"venue": "IISA 2013",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christos Troussas, Maria Virvou, Kurt Junshean Es- pinosa, Kevin Llaguno, and Jaime Caro. 2013. Sen- timent analysis of facebook statuses using naive bayes classifier for language learning. In IISA 2013, pages 1-6. IEEE.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Target-dependent twitter sentiment classification with rich automatic features",
"authors": [
{
"first": "Duy-Tin",
"middle": [],
"last": "Vo",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2015,
"venue": "Twenty-Fourth International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duy-Tin Vo and Yue Zhang. 2015. Target-dependent twitter sentiment classification with rich automatic features. In Twenty-Fourth International Joint Con- ference on Artificial Intelligence.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Transformer based multi-source domain adaptation",
"authors": [
{
"first": "Dustin",
"middle": [],
"last": "Wright",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "7963--7974",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dustin Wright and Isabelle Augenstein. 2020. Trans- former based multi-source domain adaptation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7963-7974.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Collaborative multidomain sentiment classification",
"authors": [
{
"first": "F",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2015,
"venue": "IEEE International Conference on Data Mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Wu and Y. Huang. 2015. Collaborative multi- domain sentiment classification. In IEEE Interna- tional Conference on Data Mining.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Dual adversarial colearning for multi-domain text classification",
"authors": [
{
"first": "Yuan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yuhong",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "34",
"issue": "",
"pages": "6438--6445",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuan Wu and Yuhong Guo. 2020. Dual adversarial co- learning for multi-domain text classification. In Pro- ceedings of the AAAI Conference on Artificial Intel- ligence, volume 34, pages 6438-6445.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Dual mixup regularized learning for adversarial domain adaptation",
"authors": [
{
"first": "Yuan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "El-Roby",
"suffix": ""
}
],
"year": 2020,
"venue": "European Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "540--555",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuan Wu, Diana Inkpen, and Ahmed El-Roby. 2020. Dual mixup regularized learning for adversarial do- main adaptation. In European Conference on Com- puter Vision, pages 540-555. Springer.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Mixup regularized adversarial networks for multi-domain text classification",
"authors": [
{
"first": "Yuan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "El-Roby",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2102.00467"
]
},
"num": null,
"urls": [],
"raw_text": "Yuan Wu, Diana Inkpen, and Ahmed El-Roby. 2021. Mixup regularized adversarial networks for multi-domain text classification. arXiv preprint arXiv:2102.00467.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Multi-task learning with bidirectional language models for text classification",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "Shang",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 International Joint Conference on Neural Networks (IJCNN)",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qi Yang and Lin Shang. 2019. Multi-task learning with bidirectional language models for text classification. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1-8. IEEE.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "How transferable are features in deep neural networks?",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Yosinski",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Clune",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Hod",
"middle": [],
"last": "Lipson",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3320--3328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. 2014. How transferable are features in deep neural networks? In Advances in neural information processing systems, pages 3320-3328.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A convex formulation for learning task relationships in multi-task learning",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dit-Yan",
"middle": [],
"last": "Yeung",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1203.3536"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Zhang and Dit-Yan Yeung. 2012. A convex for- mulation for learning task relationships in multi-task learning. arXiv preprint arXiv:1203.3536.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Multiple source domain adaptation with adversarial training of neural networks",
"authors": [
{
"first": "H",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "J",
"middle": [
"P"
],
"last": "Costeira",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Moura",
"suffix": ""
},
{
"first": "G",
"middle": [
"J"
],
"last": "Gordon",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.09684"
]
},
"num": null,
"urls": [],
"raw_text": "H. Zhao, S. Zhang, G. Wu, J.P. Costeira, J. Moura, and G.J. Gordon. 2017. Multiple source domain adap- tation with adversarial training of neural networks. arXiv preprint arXiv:1705.09684.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "The mismatch risk when aligning the marginal distributions in MDTC, we present the case containing two domains D 1 and D 2 . The blue regions denote distributions of D 1 , and the yellow regions denote distributions of D 2 . (a) The scenario before performing domain alignment. (2) When aligning the marginal distributions, a mismatch may occur with regard to the label.",
"num": null
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "The architecture of the CAN model. A shared feature extractor F s learns to capture domain-invariant features; each domain-specific feature extractor F i d learns to capture domain-dependent features; a conditional domain discriminator D models shared feature distributions by conditioning on discriminative information provided by label predictions; a classifier C is used to conduct text classification; J C is the classification loss function; J E D is the entropy conditioning adversarial loss function which guides the domain-invariant feature extraction.",
"num": null
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"text": "When D is trained to its optimumD * , J D * is M log M .The optimum can be obtained if and only if P 1 (f, c) = P 2 (f, c) = ... = P M (f, c) = P (f, c). The parameter sensitivity analysis.",
"num": null
},
"TABREF1": {
"html": null,
"content": "<table><tr><td>4.1 Experimental Settings</td></tr><tr><td>Dataset</td></tr></table>",
"text": "Statistics of the FDU-MTL dataset",
"num": null,
"type_str": "table"
},
"TABREF3": {
"html": null,
"content": "<table><tr><td>Domain</td><td colspan=\"7\">MT-CNN MT-DNN ASP-MTL MAN-L2 MAN-NLL MT-BL CAN(Proposed)</td></tr><tr><td>books</td><td>84.5</td><td>82.2</td><td>84.0</td><td>87.6</td><td>86.8</td><td>89.0</td><td>87.8 \u00b1 0.2</td></tr><tr><td>electronics</td><td>83.2</td><td>81.7</td><td>86.8</td><td>87.4</td><td>88.8</td><td>90.2</td><td>91.6 \u00b1 0.5</td></tr><tr><td>dvd</td><td>84.0</td><td>84.2</td><td>85.5</td><td>88.1</td><td>88.6</td><td>88.0</td><td>89.5 \u00b1 0.4</td></tr><tr><td>kitchen</td><td>83.2</td><td>80.7</td><td>86.2</td><td>89.8</td><td>89.9</td><td>90.5</td><td>90.8 \u00b1 0.3</td></tr><tr><td>apparel</td><td>83.7</td><td>85.0</td><td>87.0</td><td>87.6</td><td>87.6</td><td>87.2</td><td>87.0 \u00b1 0.7</td></tr><tr><td>camera</td><td>86.0</td><td>86.2</td><td>89.2</td><td>91.4</td><td>90.7</td><td>89.5</td><td>93.5 \u00b1 0.1</td></tr><tr><td>health</td><td>87.2</td><td>85.7</td><td>88.2</td><td>89.8</td><td>89.4</td><td>92.5</td><td>90.4 \u00b1 0.6</td></tr><tr><td>music</td><td>83.7</td><td>84.7</td><td>82.5</td><td>85.9</td><td>85.5</td><td>86.0</td><td>86.9 \u00b1 0.1</td></tr><tr><td>toys</td><td>89.2</td><td>87.7</td><td>88.0</td><td>90.0</td><td>90.4</td><td>92.0</td><td>90.0 \u00b1 0.3</td></tr><tr><td>video</td><td>81.5</td><td>85.0</td><td>84.5</td><td>89.5</td><td>89.6</td><td>88.0</td><td>88.8 \u00b1 0.4</td></tr><tr><td>baby</td><td>87.7</td><td>88.0</td><td>88.2</td><td>90.0</td><td>90.2</td><td>88.7</td><td>92.0 \u00b1 0.2</td></tr><tr><td>magazine</td><td>87.7</td><td>89.5</td><td>92.2</td><td>92.5</td><td>92.9</td><td>92.5</td><td>94.5 \u00b1 0.5</td></tr><tr><td>software</td><td>86.5</td><td>85.7</td><td>87.2</td><td>90.4</td><td>90.9</td><td>91.7</td><td>90.9 \u00b1 0.2</td></tr><tr><td>sports</td><td>84.0</td><td>83.2</td><td>85.7</td><td>89.0</td><td>89.0</td><td>89.5</td><td>91.2 \u00b1 0.7</td></tr><tr><td>IMDb</td><td>86.2</td><td>83.2</td><td>85.5</td><td>86.6</td><td>87.0</td><td>88.0</td><td>88.5 \u00b1 0.6</td></tr><tr><td>MR</td><td>74.5</td><td>75.5</td><td>76.7</td><td>76.1</td><td>76.7</td><td>75.7</td><td>77.1 \u00b1 0.9</td></tr><tr><td>AVG</td><td>84.5</td><td>84.3</td><td>86.1</td><td>88.2</td><td>88.4</td><td>88.6</td><td>89.4 \u00b1 0.1</td></tr></table>",
"text": "MDTC classification accuracies on the Amazon review dataset.",
"num": null,
"type_str": "table"
},
"TABREF4": {
"html": null,
"content": "<table/>",
"text": "MDTC classification accuracies on the FDU-MTL dataset.",
"num": null,
"type_str": "table"
},
"TABREF5": {
"html": null,
"content": "<table><tr><td>Method</td><td>Books DVD Electr. Kit.</td><td>AVG</td></tr><tr><td>CAN (full)</td><td colspan=\"2\">83.76 84.68 88.34 90.03 86.70</td></tr><tr><td>CAN w/o C</td><td colspan=\"2\">82.45 84.45 87.30 89.65 85.96</td></tr><tr><td>CAN w/o E</td><td>83.60</td><td/></tr></table>",
"text": "84.80 87.70 89.40 86.38 CAN w/o CE 82.98 84.03 87.06 88.57 85.66",
"num": null,
"type_str": "table"
},
"TABREF6": {
"html": null,
"content": "<table/>",
"text": "Ablation study analysis on the Amazon review dataset.",
"num": null,
"type_str": "table"
},
"TABREF8": {
"html": null,
"content": "<table/>",
"text": "Unsupervised multi-source domain adaptation results on the Amazon review dataset.",
"num": null,
"type_str": "table"
}
}
}
}