{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:11:03.669670Z" }, "title": "Pseudo-Label Guided Unsupervised Domain Adaptation of Contextual Embeddings", "authors": [ { "first": "Tianyu", "middle": [], "last": "Chen", "suffix": "", "affiliation": {}, "email": "tianyuc@buaa.edu.cn" }, { "first": "Shaohan", "middle": [], "last": "Huang", "suffix": "", "affiliation": {}, "email": "shaohanh@microsoft.com" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "", "affiliation": {}, "email": "fuwei@microsoft.com" }, { "first": "Jianxin", "middle": [], "last": "Li", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Beihang", "middle": [], "last": "University", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Microsoft", "middle": [], "last": "Research", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Contextual embedding models such as BERT can be easily fine-tuned on labeled samples to create a state-of-the-art model for many downstream tasks. However, the fine-tuned BERT model suffers considerably from unlabeled data when applied to a different domain. In unsupervised domain adaptation, we aim to train a model that works well on a target domain when provided with labeled source samples and unlabeled target samples. In this paper, we propose a pseudo-label guided method for unsupervised domain adaptation. Two models are fine-tuned on labeled source samples as pseudo labeling models. To learn representations for the target domain, one of those models is adapted by masked language modeling from the target domain. Then those models are used to assign pseudo-labels to target samples. We train the final model with those samples. We evaluate our method on named entity segmentation and sentiment analysis tasks. These experiments show that our approach outperforms baseline methods.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Contextual embedding models such as BERT can be easily fine-tuned on labeled samples to create a state-of-the-art model for many downstream tasks. However, the fine-tuned BERT model suffers considerably from unlabeled data when applied to a different domain. In unsupervised domain adaptation, we aim to train a model that works well on a target domain when provided with labeled source samples and unlabeled target samples. In this paper, we propose a pseudo-label guided method for unsupervised domain adaptation. Two models are fine-tuned on labeled source samples as pseudo labeling models. To learn representations for the target domain, one of those models is adapted by masked language modeling from the target domain. Then those models are used to assign pseudo-labels to target samples. We train the final model with those samples. We evaluate our method on named entity segmentation and sentiment analysis tasks. These experiments show that our approach outperforms baseline methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Contextualized embeddings have become the foundations of many state-of-the-art natural language processing technologies (Devlin et al., 2018; Han and Eisenstein, 2019; Strakov\u00e1 et al., 2019) . 
Pretrained contextualized embeddings can be used for many downstream tasks or incorporated into an end-to-end system, allowing the embeddings to be fine-tuned on task-specific labeled data (Akbik et al., 2019a,b, 2018; Beltagy et al., 2019) .", "cite_spans": [ { "start": 120, "end": 141, "text": "(Devlin et al., 2018;", "ref_id": "BIBREF5" }, { "start": 142, "end": 167, "text": "Han and Eisenstein, 2019;", "ref_id": "BIBREF8" }, { "start": 168, "end": 190, "text": "Strakov\u00e1 et al., 2019)", "ref_id": "BIBREF18" }, { "start": 388, "end": 408, "text": "(Akbik et al., 2019a", "ref_id": "BIBREF0" }, { "start": 409, "end": 432, "text": "(Akbik et al., ,b, 2018", "ref_id": null }, { "start": 433, "end": 454, "text": "Beltagy et al., 2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One of the problems with contextual embedding models is that although fine-tuned models perform well on samples drawn from the same distribution as the training samples, their performance degrades considerably when they are applied to a different domain (Saito et al., 2017; Rietzler et al., 2019; Ruder and Plank, 2018) . [Figure 1: Overview of our training framework. We jointly fine-tune the general pre-trained model and the target domain pre-trained model with labeled source data. Then we generate pseudo-labels on target samples. Finally, we train the final model with the pseudo-labeled samples.] For example, a named entity segmentation model trained on a news dataset fails to predict correctly on social media data such as Twitter. Because collecting many labeled samples in various domains is expensive, it is important to adapt contextual embedding models to different domains in an unsupervised setting.", "cite_spans": [ { "start": 258, "end": 278, "text": "(Saito et al., 2017;", "ref_id": "BIBREF17" }, { "start": 279, "end": 295, "text": "Rietzler et al.,", "ref_id": "BIBREF14" }, { "start": 568, "end": 590, "text": "Ruder and Plank, 2018)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Many domain adaptation methods for neural networks in NLP have been proposed in the past several years (Li, 2012; Ziser and Reichart, 2019, 2018; Cui et al., 2018; Louizos et al., 2015; Ganin et al., 2015; Mou et al., 2016) . Our work focuses on unsupervised domain adaptation of contextual embeddings. We aim to fine-tune a pre-trained model that works well on a target domain when provided with labeled source samples and unlabeled target samples. 
With no access to labels in the target domain, adaptation is especially difficult when the divergence between the source and target label distributions is large.", "cite_spans": [ { "start": 102, "end": 112, "text": "(Li, 2012;", "ref_id": "BIBREF10" }, { "start": 113, "end": 134, "text": "Reichart, 2019, 2018;", "ref_id": null }, { "start": 135, "end": 152, "text": "Cui et al., 2018;", "ref_id": "BIBREF4" }, { "start": 153, "end": 174, "text": "Louizos et al., 2015;", "ref_id": "BIBREF11" }, { "start": 175, "end": 194, "text": "Ganin et al., 2015;", "ref_id": "BIBREF6" }, { "start": 195, "end": 212, "text": "Mou et al., 2016)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Several current methods adopt a simple unsupervised domain-adaptive strategy that applies a masked language modeling objective to unlabeled text in the target domain (Rochette et al., 2019; Han and Eisenstein, 2019; Gururangan et al., 2020) . They first learn discriminative representations for the target domain and then fine-tune a domain-adapted model with labeled source samples. Although self-supervised fine-tuning in the target domain improves the generalization of the pre-trained model for the target domain, the adapted model cannot capture the task-specific patterns of the target domain when it is fine-tuned only on labeled source samples. We expect the adapted model not only to acquire target-discriminative language representations but also to capture task-specific features of the target domain.", "cite_spans": [ { "start": 162, "end": 185, "text": "(Rochette et al., 2019;", "ref_id": "BIBREF15" }, { "start": 186, "end": 211, "text": "Han and Eisenstein, 2019;", "ref_id": "BIBREF8" }, { "start": 212, "end": 236, "text": "Gururangan et al., 2020)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a pseudo-label guided method for unsupervised domain adaptation. As shown in Figure 1 , two models are jointly fine-tuned on labeled source samples as pseudo-labeling models. We design a multiview constraint loss to encourage these two models to make predictions based on different viewpoints. To learn representations for the target domain, one of the models is adapted by masked language modeling on text from the target domain. The two models are then used to assign pseudo-labels to target samples; these pseudo-labeled target samples provide target-discriminative information to the model. We train the final model on the pseudo-labeled samples.", "cite_spans": [], "ref_spans": [ { "start": 103, "end": 111, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We evaluate our method on both named entity segmentation (NES) and sentiment analysis (SA) tasks and find that our pseudo-label guided method outperforms baseline methods. 
Moreover, we demonstrate that the multiview constraint significantly improves the performance of our method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In unsupervised domain adaptation, we aim to train a model that works well on a target domain when provided with labeled source samples and unlabeled target samples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology 2.1 Overview", "sec_num": "2" }, { "text": "As illustrated in Figure 1 , Pseudo-label Guided unsupervised Adaptation (PGA) consists of three steps: joint fine-tuning, pseudo-label generation, and final adaptation. First, we initialize two pre-trained models with the same architecture: the general pre-trained model and the target domain pre-trained model (TDPM), which is obtained by training with a masked language modeling objective on unlabeled data from the target domain. In the second step, we use the two fine-tuned models to make predictions for unlabeled target samples. If both models agree on the prediction and the prediction scores exceed a threshold, the prediction is kept as a pseudo-label. Finally, all pseudo-labeled samples are collected to fine-tune the target domain pre-trained model and complete the adaptation.", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 26, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Methodology 2.1 Overview", "sec_num": "2" },
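To make the first step concrete, the following is a minimal sketch of adapting the general pre-trained model to the target domain with masked language modeling, using the Hugging Face transformers Trainer. The 0.15 masking rate, 3 epochs, batch size, learning rate, warmup ratio, and weight decay follow Appendix A; the `UnlabeledTextDataset` wrapper, the placeholder `target_sentences`, and the `tdpm` output path are illustrative assumptions rather than the authors' released code.

```python
"""Step 1 of PGA (sketch): build the target domain pre-trained model (TDPM) via MLM."""
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

class UnlabeledTextDataset(Dataset):
    """Wraps raw target-domain sentences (e.g. tweets) as token-id examples."""
    def __init__(self, sentences, tokenizer, max_length=128):
        self.encodings = tokenizer(sentences, truncation=True, max_length=max_length)

    def __len__(self):
        return len(self.encodings["input_ids"])

    def __getitem__(self, i):
        return {key: torch.tensor(val[i]) for key, val in self.encodings.items()}

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tdpm = AutoModelForMaskedLM.from_pretrained("bert-base-cased")

# Placeholder for the unlabeled target-domain corpus (e.g. WNUT tweets).
target_sentences = ["so excited for the game tonight", "new phone who dis"]
train_set = UnlabeledTextDataset(target_sentences, tokenizer)

# 15% of tokens are masked, matching the rate reported in Appendix A.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)

args = TrainingArguments(output_dir="tdpm", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5,
                         warmup_ratio=0.1, weight_decay=0.1)

Trainer(model=tdpm, args=args, train_dataset=train_set,
        data_collator=collator).train()
tdpm.save_pretrained("tdpm")        # reloaded later as the initialization of F_2
tokenizer.save_pretrained("tdpm")
```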
{ "text": "In the first stage, we jointly fine-tune the general pre-trained model and the target domain pre-trained model on the source domain data to obtain two classification models F_1 and F_2. Their predictions are later used to assign pseudo-labels. We assume each sample in the source domain can be denoted as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Jointly Fine-tuning", "sec_num": "2.2" }, { "text": "(X_i, Y_i),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Jointly Fine-tuning", "sec_num": "2.2" }, { "text": "where X is a text sequence and Y is a label sequence for the NES task or a single label for the SA task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Jointly Fine-tuning", "sec_num": "2.2" }, { "text": "For the named entity segmentation task, the token representations are fed into a token-level output layer. For sentiment analysis, the [CLS] representation is fed into an output layer for classification (Devlin et al., 2018) .", "cite_spans": [ { "start": 202, "end": 223, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Jointly Fine-tuning", "sec_num": "2.2" }, { "text": "Inspired by the asymmetric tri-training adaptation method (Saito et al., 2017) , we design a multiview constraint loss to encourage models F_1 and F_2 to make predictions based on different viewpoints. We add the term |W_1^T W_2| to the cost function, where W_1 and W_2 denote the output-layer weights of F_1 and F_2. With this constraint, the two models learn from different features. The objective for learning F_1 and F_2 is defined as:", "cite_spans": [ { "start": 58, "end": 78, "text": "(Saito et al., 2017)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Jointly Fine-tuning", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E(\\theta_{F_1}, \\theta_{F_2}) = \\mathrm{CE}(F_1(x_i), y_i) + \\mathrm{CE}(F_2(x_i), y_i) + \\lambda |W_1^{\\top} W_2|", "eq_num": "(1)" } ], "section": "Jointly Fine-tuning", "sec_num": "2.2" }, { "text": "where CE denotes the standard cross-entropy loss and the trade-off parameter \u03bb is chosen on a validation split. With the multiview learning objective, the pseudo-labels are more informative and improve model accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Jointly Fine-tuning", "sec_num": "2.2" }, { "text": "After joint fine-tuning, the classification models F_1 and F_2 are used to generate pseudo-labels. Pseudo-labels provide target-discriminative information to the model; however, since they inevitably contain false labels, we have to select reliable ones. For a text sequence X in the unlabeled target domain data, we assign a pseudo label sequence Y in the NES task and a single pseudo label Y in the SA task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pseudo Label Generation", "sec_num": "2.3" }, { "text": "There are two requirements for pseudo-label assignment. Take the NES task as an example. First, let C_i^1 and C_i^2 denote the classes with the maximum predicted probability for token X_i under models F_1 and F_2, respectively; we require C_i^1 = C_i^2, which means the two models agree on the prediction. The second requirement is that the probability of C_i^1 or C_i^2 exceeds a threshold parameter, which we set to 0.5 in our experiments. We assume that unless both models are confident in their predictions, the prediction is not reliable. If both requirements are satisfied, the label is added to the pseudo-labeled target samples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pseudo Label Generation", "sec_num": "2.3" }, { "text": "Intuitively, if the two domains are closely related, pseudo-labels are assigned to a large portion of the target samples, while distant domains reduce the amount of agreement. We expect the agreement threshold to keep the pseudo-labels reliable while maintaining a reasonable number of samples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pseudo Label Generation", "sec_num": "2.3" }, { "text": "We use the pseudo-labeled target samples to construct a training set for the target domain and further fine-tune the target domain pre-trained model on this training set. Since the accuracy of the pseudo-labels cannot be guaranteed, we use a smaller learning rate and fewer training steps when fine-tuning F_2 on them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Final Adaptation", "sec_num": "2.4" }, { "text": "The whole training procedure is depicted in Algorithm 1: it takes labeled source samples and unlabeled target samples as input and outputs the adapted model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Final Adaptation", "sec_num": "2.4" },
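As a concrete illustration of Equation (1), here is a minimal PyTorch sketch of the joint fine-tuning loss, written for the sentence-level SA case with Hugging Face sequence classification heads (the NES case would use token classification heads instead). The checkpoint path "tdpm", the value of `lam`, and the reading of |W_1^T W_2| as the sum of absolute entries of the class-by-class weight product are assumptions for illustration; the paper tunes \u03bb on a validation split but does not report its value.

```python
"""Step 2 of PGA (sketch): joint fine-tuning of F_1 and F_2 with the multiview constraint."""
import torch
import torch.nn.functional as F
from transformers import AutoModelForSequenceClassification

# F_1 starts from the general pre-trained model, F_2 from the domain-adapted TDPM
# (the "tdpm" path is the checkpoint saved in the previous sketch).
f1 = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
f2 = AutoModelForSequenceClassification.from_pretrained("tdpm", num_labels=2)

lam = 0.01  # trade-off weight lambda; placeholder value, tuned on validation data in the paper

def joint_loss(batch, labels):
    """Cross-entropy for both models plus the multiview penalty lambda * |W_1^T W_2|."""
    ce = (F.cross_entropy(f1(**batch).logits, labels)
          + F.cross_entropy(f2(**batch).logits, labels))
    # Hugging Face stores the output layer as (num_labels, hidden), so W_1 W_2^T below
    # gives the class-by-class inner products written as W_1^T W_2 in the paper; summing
    # their absolute values is one common reading of the |.| in that term.
    w1, w2 = f1.classifier.weight, f2.classifier.weight
    multiview_penalty = torch.sum(torch.abs(w1 @ w2.t()))
    return ce + lam * multiview_penalty
```

In training, this loss would be minimized over batches of labeled source samples with a standard optimizer such as Adam.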
{ "text": "Named Entity Segmentation (NES) Named entity segmentation is a typical sequence labeling task. Different from named entity recognition, we only predict the \"BIO\" format of a sequence, which divides the sequence into entity chunks without deciding which class each entity chunk belongs to. We choose the shared task of the 2016 Workshop on Noisy User-generated Text (WNUT; Strauss et al., 2016) as the target domain and the canonical CoNLL 2003 shared task (Tjong Kim Sang and De Meulder, 2003) as the source domain. The WNUT corpus is drawn from Twitter, an open-domain source, while the CoNLL 2003 data was annotated on a corpus of news text.", "cite_spans": [ { "start": 377, "end": 398, "text": "Strauss et al., 2016)", "ref_id": "BIBREF19" }, { "start": 457, "end": 494, "text": "(Tjong Kim Sang and De Meulder, 2003)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "Sentiment Analysis (SA) Sentiment analysis is a sequence classification task. Since we need to assign a sentiment class to each whole sentence, the task relies more on sentence-level contextual information. We choose 3 domains from open Amazon review data (He and McAuley, 2016) , including books, electronics and kitchens, following the settings of Ruder and Plank's domain adaptation survey (Ruder and Plank, 2018) . The data statistics and hyper-parameters are given in Appendices A and B. The source code * and the sentiment analysis dataset will be released at a future date.", "cite_spans": [ { "start": 94, "end": 116, "text": "(He and McAuley, 2016)", "ref_id": "BIBREF9" }, { "start": 229, "end": 252, "text": "(Ruder and Plank, 2018)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "Algorithm 1: We jointly fine-tune two models and generate pseudo-labels for final adaptation. Input: S = {(x_i, t_i)}_{i=1}^{m} (labeled source samples), T = {x_j}_{j=1}^{n} (unlabeled target samples). Train TDPM on T with the language modeling objective. Initialize F_1 with the original BERT and F_2 with TDPM. Train F_1 and F_2 with Equation 1. Initialize T_l = \u2205. For each x_j in T: compute y_j^1 = F_1(x_j) and y_j^2 = F_2(x_j); let C_j^1 = argmax(y_j^1) and C_j^2 = argmax(y_j^2); if C_j^1 == C_j^2 and max(y_j^1, y_j^2) > threshold, add (x_j, y_j^1) to T_l. Train F_2 on T_l with supervised learning. Output: F_2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" },
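The selection loop of Algorithm 1 can be sketched in a few lines of PyTorch for the sentence-level SA case; the helper `predict_proba`, the batch format, and the variable names are illustrative, while the 0.5 threshold is the value reported in the paper.

```python
"""Sketch of the pseudo-label selection loop in Algorithm 1 (sentence-level SA case)."""
import torch

THRESHOLD = 0.5  # agreement/confidence threshold used in the paper's experiments

@torch.no_grad()
def predict_proba(model, batch):
    """Class probabilities from a fine-tuned classifier."""
    return torch.softmax(model(**batch).logits, dim=-1)

def generate_pseudo_labels(f1, f2, unlabeled_batches):
    """Keep a target sample only if F_1 and F_2 agree and at least one is confident."""
    pseudo_labeled = []
    for texts, batch in unlabeled_batches:            # (raw texts, tokenized tensors)
        p1, p2 = predict_proba(f1, batch), predict_proba(f2, batch)
        c1, c2 = p1.argmax(dim=-1), p2.argmax(dim=-1)
        conf = torch.maximum(p1.max(dim=-1).values, p2.max(dim=-1).values)
        keep = (c1 == c2) & (conf > THRESHOLD)
        for i in torch.nonzero(keep).flatten().tolist():
            pseudo_labeled.append((texts[i], int(c1[i])))
    return pseudo_labeled  # later used to fine-tune F_2 with a smaller learning rate
```

For NES, the same agreement-and-confidence rule would be applied per token rather than per sentence.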
{ "text": "We evaluate the following systems:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "3.2" }, { "text": "Source only: This baseline directly fine-tunes the pre-trained BERT on the source domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "3.2" }, { "text": "Frozen BERT: This baseline first learns from unlabeled target domain data by language modeling. It then freezes the BERT encoder and optimizes only the classifier layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "3.2" }, { "text": "AdaptaBERT: This baseline first learns from unlabeled target domain data by language modeling. It then fine-tunes the target domain pre-trained model with labeled source samples \u2020 (Han and Eisenstein, 2019) .", "cite_spans": [ { "start": 182, "end": 208, "text": "(Han and Eisenstein, 2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "3.2" }, { "text": "PGA: Our pseudo-label guided unsupervised adaptation method described in Section 2. PGA w/o MC: Our pseudo-label guided unsupervised adaptation method without the multiview constraint (MC).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "3.2" }, { "text": "Upper Bound: As a supervised reference, we fine-tune the target domain pre-trained model directly on the target training set and evaluate it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "3.2" }, { "text": "As indicated in Table 1 , AdaptaBERT shows strong unsupervised adaptation performance, achieving a much better F1 score than the zero-shot source-only setting. Moreover, even without the multiview constraint, our PGA method performs better than AdaptaBERT. With the multiview constraint, our method learns higher-quality pseudo-labels, which pushes the F1 score closer to the upper bound.", "cite_spans": [], "ref_spans": [ { "start": 16, "end": 23, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "3.3" }, { "text": "We present the domain adaptation results for sentiment analysis in Table 2 . Without the multiview constraint, the quality of the pseudo-labels cannot be ensured, which sometimes leads to a drop in performance on the target domain. However, our PGA method with the multiview constraint improves model accuracy in most scenarios. We also observe that supervised learning still greatly outperforms the unsupervised methods; there remains considerable room for improvement in unsupervised domain adaptation for sentiment analysis.", "cite_spans": [], "ref_spans": [ { "start": 66, "end": 73, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "3.3" }, { "text": "We quantitatively evaluate the quality of the pseudo-labels generated by our method. First, Figure 2 shows that the number of pseudo-labels is closely related to the threshold value. When the threshold is set above 0.5, the number of pseudo-labels drops quickly while their accuracy increases substantially. With the multiview constraint, our PGA method tends to generate fewer labels with higher accuracy. We observe that a higher threshold may not benefit the final model accuracy, partly due to the significant drop in the number of pseudo-labels. We also evaluate the accuracy of the pseudo-labels in the SA task; the results are reported in Table 3 . In the different domain adaptation settings, the multiview constraint improves the quality of the pseudo-labels.", "cite_spans": [], "ref_spans": [ { "start": 176, "end": 184, "text": "Figure 2", "ref_id": "FIGREF0" }, { "start": 653, "end": 660, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Pseudo Labels Quality", "sec_num": "3.4" }, { "text": "We propose a new unsupervised domain adaptation method guided by pseudo-labels. Generated by the general pre-trained model and the target domain pre-trained model under a multiview constraint, the pseudo-labels of unlabeled target data are more reliable and benefit model performance on the target domain. 
Experiments show that our approach achieves very promising results on different NLP downstream tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "* Implementation retrieved from https://github. com/huggingface/transformers \u2020 Use the released code to conduct our experiment https://github.com/xhan77/AdaptaBERT", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We use a bert-base-cased model from huggingface \u2021 as initial parameter checkpoint. In supervised learning stage, we fine-tune the model for 3 to 5 epochs with batch size between [16, 32] and learning rate between [2e-5, 5e-5]. The weight decay of model parameters has been set as 0.1. Adam optimizer has been adopted with a warm up ratio of 0.1. In the unsupervised language modeling stage, the rate of masking tokens has been set as 0.15.We train the model for 3 epochs and adopt the same optimizer settings in supervised learning. We set the threshold parameter as 0.5 for the best performance model in both NES and SA task.", "cite_spans": [ { "start": 178, "end": 182, "text": "[16,", "ref_id": null }, { "start": 183, "end": 186, "text": "32]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A Hyperparameters", "sec_num": null }, { "text": "In this section, we will introduce the data statistics of our named entity segmentation and sentiment analysis . In Table 4 , the data are from origin AdaptaBERT (Han and Eisenstein, 2019) , of which Twitter dataset has more unlabeled dev data than labeled train data. In Table 5 , the data is collected from open Amazon review (He and McAuley, 2016; McAuley et al., 2015) . We process the data into 3 domains of balanced datasets. In the pre-processing stage, we exclude the text whose length is too short (shorter than 1 valid token) or too long (more than 256 valid tokens). The sentiment analysis data has been shared via google drive \u00a7 .", "cite_spans": [ { "start": 162, "end": 188, "text": "(Han and Eisenstein, 2019)", "ref_id": "BIBREF8" }, { "start": 328, "end": 350, "text": "(He and McAuley, 2016;", "ref_id": "BIBREF9" }, { "start": 351, "end": 372, "text": "McAuley et al., 2015)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 116, "end": 123, "text": "Table 4", "ref_id": null }, { "start": 272, "end": 279, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "B Data Statistics", "sec_num": null }, { "text": "CoNLL 14986 10,000 Twitter 2394 3852 Table 4 : Data statistics of named entity segmentation.", "cite_spans": [], "ref_spans": [ { "start": 37, "end": 44, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Dataset Train Dev", "sec_num": null }, { "text": "Train Dev Books 100,000 10,000 Electronics 100,000 10,000 Kitchens 100,000 10,000 ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "We use a single Tesla P40 card to conduct all our experiments. The average runtime of each approach is 3 minutes for named entity segmentation and 30 minutes for sentiment analysis. 
The number of parameters in our model is about 110M , same as the bert-base model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C Experiment details", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Flair: An easy-to-use framework for state-of-the-art nlp", "authors": [ { "first": "Alan", "middle": [], "last": "Akbik", "suffix": "" }, { "first": "Tanja", "middle": [], "last": "Bergmann", "suffix": "" }, { "first": "Duncan", "middle": [], "last": "Blythe", "suffix": "" }, { "first": "Kashif", "middle": [], "last": "Rasul", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Schweter", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Vollgraf", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)", "volume": "", "issue": "", "pages": "54--59", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019a. Flair: An easy-to-use framework for state-of-the-art nlp. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Com- putational Linguistics (Demonstrations), pages 54- 59.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Pooled contextualized embeddings for named entity recognition", "authors": [ { "first": "Alan", "middle": [], "last": "Akbik", "suffix": "" }, { "first": "Tanja", "middle": [], "last": "Bergmann", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Vollgraf", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "724--728", "other_ids": { "DOI": [ "10.18653/v1/N19-1078" ] }, "num": null, "urls": [], "raw_text": "Alan Akbik, Tanja Bergmann, and Roland Vollgraf. 2019b. Pooled contextualized embeddings for named entity recognition. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long and Short Papers), pages 724-728, Minneapolis, Min- nesota. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Contextual string embeddings for sequence labeling", "authors": [ { "first": "Alan", "middle": [], "last": "Akbik", "suffix": "" }, { "first": "Duncan", "middle": [], "last": "Blythe", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Vollgraf", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1638--1649", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638-1649, Santa Fe, New Mexico, USA. 
Associ- ation for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Scibert: A pretrained language model for scientific text", "authors": [ { "first": "Iz", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Arman", "middle": [], "last": "Cohan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3606--3611", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. Scibert: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 3606- 3611.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A comparative study of pivot selection strategies for unsupervised domain adaptation. The Knowledge Engineering Review", "authors": [ { "first": "Xia", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Noor", "middle": [], "last": "Al-Bazzas", "suffix": "" }, { "first": "F", "middle": [ "P" ], "last": "Bollegala", "suffix": "" }, { "first": "", "middle": [], "last": "Coenen", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xia Cui, Noor Al-Bazzas, DANUSHKA Bollegala, and FP Coenen. 2018. A comparative study of pivot selection strategies for unsupervised domain adapta- tion. The Knowledge Engineering Review.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Domain-adversarial training of neural networks", "authors": [ { "first": "Yaroslav", "middle": [], "last": "Ganin", "suffix": "" }, { "first": "Evgeniya", "middle": [], "last": "Ustinova", "suffix": "" }, { "first": "Hana", "middle": [], "last": "Ajakan", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Germain", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Larochelle", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Laviolette", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Marchand", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Lempitsky", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Fran\u00e7ois Lavio- lette, Mario Marchand, and Victor Lempitsky. 2015. 
Domain-adversarial training of neural networks.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Don't stop pretraining: Adapt language models to domains and tasks", "authors": [ { "first": "Ana", "middle": [], "last": "Suchin Gururangan", "suffix": "" }, { "first": "Swabha", "middle": [], "last": "Marasovi\u0107", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Iz", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Downey", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2020, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of ACL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Unsupervised domain adaptation of contextualized embeddings for sequence labeling", "authors": [ { "first": "Xiaochuang", "middle": [], "last": "Han", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "4229--4239", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaochuang Han and Jacob Eisenstein. 2019. Unsu- pervised domain adaptation of contextualized em- beddings for sequence labeling. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4229-4239.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering", "authors": [ { "first": "Ruining", "middle": [], "last": "He", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Mcauley", "suffix": "" } ], "year": 2016, "venue": "proceedings of the 25th international conference on world wide web", "volume": "", "issue": "", "pages": "507--517", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In proceedings of the 25th international conference on world wide web, pages 507-517.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Literature survey: domain adaptation algorithms for natural language processing", "authors": [ { "first": "Qi", "middle": [], "last": "Li", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "8--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qi Li. 2012. Literature survey: domain adaptation algo- rithms for natural language processing. 
Department of Computer Science The Graduate Center, The City University of New York, pages 8-10.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The variational fair autoencoder", "authors": [ { "first": "Christos", "middle": [], "last": "Louizos", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Swersky", "suffix": "" }, { "first": "Yujia", "middle": [], "last": "Li", "suffix": "" }, { "first": "Max", "middle": [], "last": "Welling", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zemel", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard Zemel. 2015. The variational fair autoencoder.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Image-based recommendations on styles and substitutes", "authors": [ { "first": "Julian", "middle": [], "last": "Mcauley", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Targett", "suffix": "" }, { "first": "Qinfeng", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Anton", "middle": [], "last": "Van Den", "suffix": "" }, { "first": "", "middle": [], "last": "Hengel", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "43--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton Van Den Hengel. 2015. Image-based recom- mendations on styles and substitutes. In Proceed- ings of the 38th International ACM SIGIR Confer- ence on Research and Development in Information Retrieval, pages 43-52.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "How transferable are neural networks in nlp applications", "authors": [ { "first": "Lili", "middle": [], "last": "Mou", "suffix": "" }, { "first": "Zhao", "middle": [], "last": "Meng", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Ge", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhi", "middle": [], "last": "Jin", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lili Mou, Zhao Meng, Rui Yan, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. 2016. How transferable are neural networks in nlp applications?", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Adapt or get left behind: Domain adaptation through bert language model finetuning for aspect-target sentiment classification", "authors": [ { "first": "Alexander", "middle": [], "last": "Rietzler", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Stabinger", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Opitz", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Engl", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.11860" ] }, "num": null, "urls": [], "raw_text": "Alexander Rietzler, Sebastian Stabinger, Paul Opitz, and Stefan Engl. 2019. Adapt or get left behind: Domain adaptation through bert language model finetuning for aspect-target sentiment classification. 
arXiv preprint arXiv:1908.11860.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Unsupervised domain adaptation of contextual embeddings for low-resource duplicate question detection", "authors": [ { "first": "Alexandre", "middle": [], "last": "Rochette", "suffix": "" }, { "first": "Yadollah", "middle": [], "last": "Yaghoobzadeh", "suffix": "" }, { "first": "Timothy", "middle": [ "J" ], "last": "Hazen", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.02645" ] }, "num": null, "urls": [], "raw_text": "Alexandre Rochette, Yadollah Yaghoobzadeh, and Tim- othy J Hazen. 2019. Unsupervised domain adap- tation of contextual embeddings for low-resource duplicate question detection. arXiv preprint arXiv:1911.02645.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Strong baselines for neural semi-supervised learning under domain shift", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Ruder and Barbara Plank. 2018. Strong base- lines for neural semi-supervised learning under do- main shift. CoRR, abs/1804.09530.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Asymmetric tri-training for unsupervised domain adaptation", "authors": [ { "first": "Kuniaki", "middle": [], "last": "Saito", "suffix": "" }, { "first": "Yoshitaka", "middle": [], "last": "Ushiku", "suffix": "" }, { "first": "Tatsuya", "middle": [], "last": "Harada", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning", "volume": "70", "issue": "", "pages": "2988--2997", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kuniaki Saito, Yoshitaka Ushiku, and Tatsuya Harada. 2017. Asymmetric tri-training for unsupervised do- main adaptation. In Proceedings of the 34th Interna- tional Conference on Machine Learning-Volume 70, pages 2988-2997. JMLR. org.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Neural architectures for nested NER through linearization", "authors": [ { "first": "Jana", "middle": [], "last": "Strakov\u00e1", "suffix": "" }, { "first": "Milan", "middle": [], "last": "Straka", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5326--5331", "other_ids": { "DOI": [ "10.18653/v1/P19-1527" ] }, "num": null, "urls": [], "raw_text": "Jana Strakov\u00e1, Milan Straka, and Jan Hajic. 2019. Neu- ral architectures for nested NER through lineariza- tion. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5326-5331, Florence, Italy. 
Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Results of the WNUT16 named entity recognition shared task", "authors": [ { "first": "Benjamin", "middle": [], "last": "Strauss", "suffix": "" }, { "first": "Bethany", "middle": [], "last": "Toma", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT)", "volume": "", "issue": "", "pages": "138--144", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Strauss, Bethany Toma, Alan Ritter, Marie- Catherine de Marneffe, and Wei Xu. 2016. Results of the WNUT16 named entity recognition shared task. In Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT), pages 138-144, Os- aka, Japan. The COLING 2016 Organizing Commit- tee.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition", "authors": [ { "first": "Erik", "middle": [ "F" ], "last": "Tjong", "suffix": "" }, { "first": "Kim", "middle": [], "last": "Sang", "suffix": "" }, { "first": "Fien", "middle": [], "last": "De Meulder", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003", "volume": "", "issue": "", "pages": "142--147", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natu- ral Language Learning at HLT-NAACL 2003, pages 142-147.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Pivot based language modeling for improved neural domain adaptation", "authors": [ { "first": "Yftah", "middle": [], "last": "Ziser", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1241--1251", "other_ids": { "DOI": [ "10.18653/v1/N18-1112" ] }, "num": null, "urls": [], "raw_text": "Yftah Ziser and Roi Reichart. 2018. Pivot based lan- guage modeling for improved neural domain adapta- tion. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, Volume 1 (Long Papers), pages 1241-1251, New Orleans, Louisiana. Association for Computa- tional Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Task refinement learning for improved accuracy and stability of unsupervised domain adaptation", "authors": [ { "first": "Yftah", "middle": [], "last": "Ziser", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5895--5906", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yftah Ziser and Roi Reichart. 2019. Task refinement learning for improved accuracy and stability of unsu- pervised domain adaptation. 
In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics, pages 5895-5906.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "The threshold influences the amount of pseudo labels. We evaluate on books to electronics domain adaptation of sentiment analysis." }, "TABREF2": { "type_str": "table", "num": null, "text": "Multi-domain sentiment analysis adaptation. All results are evaluated with accuracy score.", "content": "", "html": null }, "TABREF4": { "type_str": "table", "num": null, "text": "The accuracy of pseudo labels.", "content": "
", "html": null } } } }