{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:11:10.038994Z"
},
"title": "Multidomain Pretrained Language Models for Green NLP",
"authors": [
{
"first": "Antonis",
"middle": [],
"last": "Maronikolakis",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CIS, LMU Munich",
"location": {}
},
"email": ""
},
{
"first": "Sch\u00fctze",
"middle": [],
"last": "Hinrich",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CIS, LMU Munich",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "When tackling a task in a given domain, it has been shown that adapting a model to the domain using raw text data before training on the supervised task improves performance versus solely training on the task. The downside is that a lot of domain data is required and if we want to tackle tasks in n domains, we require n models each adapted on domain data before task learning. Storing and using these models separately can be prohibitive for low-end devices. In this paper we show that domain adaptation can be generalised to cover multiple domains. Specifically, a single model can be trained across various domains at the same time with minimal drop in performance, even when we use less data and resources. Thus, instead of training multiple models, we can train a single multidomain model saving on computational resources and training time.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "When tackling a task in a given domain, it has been shown that adapting a model to the domain using raw text data before training on the supervised task improves performance versus solely training on the task. The downside is that a lot of domain data is required and if we want to tackle tasks in n domains, we require n models each adapted on domain data before task learning. Storing and using these models separately can be prohibitive for low-end devices. In this paper we show that domain adaptation can be generalised to cover multiple domains. Specifically, a single model can be trained across various domains at the same time with minimal drop in performance, even when we use less data and resources. Thus, instead of training multiple models, we can train a single multidomain model saving on computational resources and training time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Domain adaptation in the form of training on unlabeled data has been prevalent in recent work (Rietzler et al., 2020; Han and Eisenstein, 2019) . When given a task T in domain D, it is useful to adapt our model on raw text data pertinent to D before supervised training on the labeled data of T .",
"cite_spans": [
{
"start": 94,
"end": 117,
"text": "(Rietzler et al., 2020;",
"ref_id": "BIBREF18"
},
{
"start": 118,
"end": 143,
"text": "Han and Eisenstein, 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unfortunately, domain adaptation is expensive and costly. So even though results do get better, there needs to be a balance between use of computational resources and model performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Accentuating the issue is the rising dominance of increasingly larger models (Devlin et al., 2019; Brown et al., 2020) . These large models require a lot of data and computational resources to train. Not only have these resources been prohibitive for smaller labs, but the environmental impact of training such large models cannot be understated either (Strubell et al., 2019; Lacoste et al., 2019) .",
"cite_spans": [
{
"start": 77,
"end": 98,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF5"
},
{
"start": 99,
"end": 118,
"text": "Brown et al., 2020)",
"ref_id": null
},
{
"start": 353,
"end": 376,
"text": "(Strubell et al., 2019;",
"ref_id": "BIBREF21"
},
{
"start": 377,
"end": 398,
"text": "Lacoste et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There is therefore a need to build models that can tackle tasks across multiple domains, much in the same way multilingual models are able to operate across multiple languages. These multidomain models, to be useful, need to exhibit performance comparable to models adapted to a single domain. Then, these models can be trained and deployed with reduced computational costs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we explore such multidomain models. We compare multidomain DistilBERT (Sanh et al., 2019 ) models with single-domain Distil-BERT models 1 . In our analysis, we test the multidomain model on tasks from multiple domains (including MultiNLI), we examine how important adaptation order is for performance and we show that training on the domains jointly or sequentially does not impact effectiveness. We also reproduce findings in Rietzler et al. (2020) , where the authors showed that pretraining on an irrelevant domain is not beneficial. Finally, we show that much like multilingual BERT, multidomain models can be used for better performance in low-resource domains, with the low-resource domain leveraging the larger pretrained model.",
"cite_spans": [
{
"start": 84,
"end": 102,
"text": "(Sanh et al., 2019",
"ref_id": "BIBREF19"
},
{
"start": 441,
"end": 463,
"text": "Rietzler et al. (2020)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It has been shown empirically that domain adaptation improves model performance (Gururangan et al., 2020) . Specifically, the authors experimented on RoBERTa (Liu et al., 2019) and eight tasks ranging across four domains (computer science and biomedical papers, reviews and news). Performance increased when adapting to a domain pertinent to the domain of the task, compared to when the two domains were not as related.",
"cite_spans": [
{
"start": 80,
"end": 105,
"text": "(Gururangan et al., 2020)",
"ref_id": null
},
{
"start": 158,
"end": 176,
"text": "(Liu et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Work has also been done to alleviate the environmental impact of training larger models. In Poerner et al. (2020) , an inexpensive method was proposed for domain adaptation, which can be performed on a CPU and significantly reduces training cost.",
"cite_spans": [
{
"start": 92,
"end": 113,
"text": "Poerner et al. (2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Finally, MobileBERT is a downsized BERT model with around 4 times fewer parameters (Sun et al., 2020) than the original BERT-base model. There is a need to fit larger models in low-capacity machines, with MobileBERT and our work being a step in that direction, alongside other work in the pruning area (Sanh et al., 2020; Zhao et al., 2020) .",
"cite_spans": [
{
"start": 83,
"end": 101,
"text": "(Sun et al., 2020)",
"ref_id": "BIBREF22"
},
{
"start": 302,
"end": 321,
"text": "(Sanh et al., 2020;",
"ref_id": "BIBREF20"
},
{
"start": 322,
"end": 340,
"text": "Zhao et al., 2020)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In Gururangan et al. (2020) , the authors showed that adaptation using domain data improves performance on a downstream task. In our work, we trained two types of models, single-domain models (one for each domain) and a multidomain model. All models were pretrained from scratch.",
"cite_spans": [
{
"start": 3,
"end": 27,
"text": "Gururangan et al. (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Raw Text Domain Data",
"sec_num": "3.1"
},
{
"text": "For the single-domain models, we used around 4GBs of each dataset, adapting DistilBERT on each domain for 1 epoch. This resulted in four distinct models. For our multidomain model, we adapted our model successively on all domains, using around half of the available data (approximately 2GBs from each domain) for 1 epoch as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Raw Text Domain Data",
"sec_num": "3.1"
},
{
"text": "The datasets we used for adaptation are: Amazon (He and McAuley, 2016) , Arxiv (Cohan et al., 2018) , Realnews (Zellers et al., 2019) , CS (Lo et al., 2020) and Reddit Comments (V\u00f6lske et al., 2017) .",
"cite_spans": [
{
"start": 48,
"end": 70,
"text": "(He and McAuley, 2016)",
"ref_id": "BIBREF8"
},
{
"start": 79,
"end": 99,
"text": "(Cohan et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 111,
"end": 133,
"text": "(Zellers et al., 2019)",
"ref_id": "BIBREF26"
},
{
"start": 139,
"end": 156,
"text": "(Lo et al., 2020)",
"ref_id": "BIBREF15"
},
{
"start": 177,
"end": 198,
"text": "(V\u00f6lske et al., 2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Raw Text Domain Data",
"sec_num": "3.1"
},
{
"text": "The original datasets were truncated for our experiments, using approximately only the first 4GBs of text. For the Amazon dataset, which contains reviews across multiple products (eg. books, music, clothing), we took care to balance data across all categories, by sampling at random n=50,000 reviews from products with more reviews than n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Raw Text Domain Data",
"sec_num": "3.1"
},
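The category balancing described above is a simple per-category downsampling step. The paper does not ship code for it; a minimal sketch, assuming hypothetical 'category' and 'review_text' fields and the n=50,000 cap from the text, could look like this:

```python
import random
from collections import defaultdict

N_PER_CATEGORY = 50_000  # cap from the paper: at most n=50,000 reviews per product category

def balance_reviews(reviews, seed=0):
    """Downsample over-represented categories to at most N_PER_CATEGORY reviews each.

    `reviews` is assumed to be an iterable of dicts with hypothetical
    'category' and 'review_text' keys; the exact schema is not given in the paper.
    """
    by_category = defaultdict(list)
    for review in reviews:
        by_category[review["category"]].append(review["review_text"])

    rng = random.Random(seed)
    balanced = []
    for texts in by_category.values():
        if len(texts) > N_PER_CATEGORY:
            texts = rng.sample(texts, N_PER_CATEGORY)
        balanced.extend(texts)
    return balanced
```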
{
"text": "Models were evaluated on eight supervised classification tasks in total, spanning four domains. An overview and description of the tasks can be found in Appendix A. Each model was trained on each task separately. For most of the datasets, train/dev/test splits are already provided. Where such splits are not available, we randomly sample 60/20/20 sets from the original data for train/dev/test splits respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Task Data",
"sec_num": "3.2"
},
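Where official splits are missing, the 60/20/20 random split described above can be reproduced with two calls to scikit-learn's train_test_split. This is an illustrative sketch, not the authors' code:

```python
from sklearn.model_selection import train_test_split

def make_splits(examples, labels, seed=0):
    """Randomly split a dataset 60/20/20 into train/dev/test, as described in Section 3.2."""
    x_train, x_rest, y_train, y_rest = train_test_split(
        examples, labels, test_size=0.4, random_state=seed)
    x_dev, x_test, y_dev, y_test = train_test_split(
        x_rest, y_rest, test_size=0.5, random_state=seed)
    return (x_train, y_train), (x_dev, y_dev), (x_test, y_test)
```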
{
"text": "The training of our models is broken up in two steps: domain adaptation and supervised task learning. Furthermore, we have two setups for our ex-periments: a) perform domain adaptation on four models in parallel and then train them on each individual task, and b) perform domain adaptation on a single model for all domains successively before task learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4"
},
{
"text": "We also perform ablation studies on irrelevant and low-resource domain adaptation, domain adaptation order and finally investigate whether joint instead of sequential adaptation performs better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4"
},
{
"text": "In our experiments we used DistilBERT (Sanh et al., 2019) . DistilBERT is a lightweight BERTbased model that showcases performance remarkably close to BERT, with around 40% fewer parameters. Training was performed on a publicly available 2 pretrained model. First, we perform domain adaptation on each of the four domains separately. This results in four distinct models, each adapted to a different domain. Hyperparameters can be found in Appendix B. We also train a single model successively on all four domains, resulting in a multidomain model. All possible domain adaptation orders were compared and for our comparisons we chose the order with the worst overall performance, which is Amazon \u2192 Reddit Comments \u2192 Realnews \u2192 Arxiv (Am-RC-R-Ar). Further details on order comparison can be found in 5.4.",
"cite_spans": [
{
"start": 38,
"end": 57,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation",
"sec_num": "4.1"
},
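The adaptation step is standard continued masked-language-model pretraining. Below is a minimal sketch using Hugging Face transformers and datasets; the file names are placeholders and the hyperparameters follow Appendix B (batch size 32, sequence length 32, learning rate 1e-5, one epoch per domain). It illustrates the sequential setup rather than reproducing the authors' exact code:

```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")

# Sequential multidomain adaptation: continue MLM pretraining on one domain
# after another (placeholder file names; order shown is Am-RC-R-Ar).
domain_files = ["amazon.txt", "reddit_comments.txt", "realnews.txt", "arxiv.txt"]

for path in domain_files:
    raw = load_dataset("text", data_files=path)["train"]
    tokenized = raw.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=32),
        batched=True, remove_columns=["text"])
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="adapted", per_device_train_batch_size=32,
                               learning_rate=1e-5, num_train_epochs=1),
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
    )
    trainer.train()

model.save_pretrained("multidomain-distilbert")
tokenizer.save_pretrained("multidomain-distilbert")
```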
{
"text": "After the domain adaptation phase, the singledomain and multidomain models were trained on the training data of each task. Training took place over 1 or 2 epochs, trying to keep training time approximately equal across all tasks. Namely, all tasks required 1 epoch to train, except the ACL-ARC and HyperPartisan tasks, for which the models were trained for 2 epochs. More details on the hyperparameters are available in Appendix C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Task Learning",
"sec_num": "4.2"
},
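Supervised task learning is plain sequence-classification fine-tuning on top of the adapted encoder. A sketch for a single task is given below; the dataset arguments are placeholders and the hyperparameters follow Appendix C (learning rate 4e-5, batch size 32), so treat it as an illustration rather than the exact training script:

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

def finetune_on_task(checkpoint, train_dataset, dev_dataset, num_labels, epochs=1):
    """Fine-tune an adapted DistilBERT checkpoint on one labeled task.

    `train_dataset` and `dev_dataset` are assumed to be tokenized Hugging Face
    datasets with 'input_ids', 'attention_mask' and 'label' columns.
    """
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(
        checkpoint, num_labels=num_labels)
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="task-model",
                               learning_rate=4e-5,
                               per_device_train_batch_size=32,
                               per_device_eval_batch_size=32,
                               num_train_epochs=epochs),
        train_dataset=train_dataset,
        eval_dataset=dev_dataset,
        tokenizer=tokenizer,
    )
    trainer.train()
    return trainer.evaluate()
```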
{
"text": "Evaluation took place across eight tasks, covering all four domains. As an added experiment, we also evaluated on the MultiNLI dataset (Williams et al., 2018) . MultiNLI is a dataset for textual entailment consisting of sentence pairs spanning multiple genres. We made three runs over each task and averaged the results, shown in Table 1 .",
"cite_spans": [
{
"start": 135,
"end": 158,
"text": "(Williams et al., 2018)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 330,
"end": 337,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Base vs. Single-domain vs. Multidomain",
"sec_num": "5.1"
},
{
"text": "Note that the multidomain model used here (Am-RC-R-Ar) comes from the lowest-performing domain adaptation order. We chose it to illustrate that even in the worst case scenario, there is still a substantial improvement over the base model. On average, the gains are even larger (Section 5.4). When adapting DistilBERT to a single domain, performance is greater compared to the base model. When adapting to all domains, performance still increases, although by less on average. The average increase from base DistilBERT to single-domain DistilBERT is 2.0, whereas the multidomain model shows an improvement of 1.4 over the respective base model. Overall, performance increases across tasks when adapting to all domains and in some cases the multidomain model is better than the single-domain model. Also, performance never drops for the multidomain model which, in the worst cases, still achieves marginally higher accuracy than the base model. At the same time, on MultiNLI the multidomain model scores higher than both base and all single-domain models, showcasing its domainagnostic capabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Base vs. Single-domain vs. Multidomain",
"sec_num": "5.1"
},
{
"text": "It is thus shown that multidomain models provide a boost in most tasks while never hindering performance in any task. They do so while requiring less data; from the 16GBs needed to train the four single models (4GBs for each domain), we only require 8GBs (2GBs for each domain) to train the multidomain model with comparable results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Base",
"sec_num": null
},
{
"text": "One of the advantages of multilingual models is that low-resource languages can leverage the multilingual model. A model is trained on multiple languages before the low-resource language is added. The resulting model performs better than a model trained only on the low-resource language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Low-resource Domains",
"sec_num": "5.2"
},
{
"text": "We examine whether this holds true for multidomain models as well. For this experiment, we include the biomedical domain by further adapting our multidomain model to the Pubmed (Lo et al., 2020) dataset. Namely, we experiment with 10MB, 100MB and 500MB Pubmed datasets added to Am-RC-R-Ar. After pretraining on each of the new, smaller Pubmed sets, we test our models on ChemProt (Kringelum J, 2016) and Pubmed-RCT (Dernoncourt and Lee, 2017) . As a baseline, we pretrain DistilBERT on solely the Pubmed datasets. We also evaluate how the original multidomain model does without any biomedical data. Finally, we examine if training on the low-resource domain has a catastrophic effect on the previously learned domains. Results are shown in Table 2 .",
"cite_spans": [
{
"start": 177,
"end": 194,
"text": "(Lo et al., 2020)",
"ref_id": "BIBREF15"
},
{
"start": 415,
"end": 442,
"text": "(Dernoncourt and Lee, 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 741,
"end": 748,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Low-resource Domains",
"sec_num": "5.2"
},
{
"text": "Due to differences in dataset sizes, care was taken to keep training times approximately equal for all setups. For Pub-500, we trained for 1 epoch. For Pub-100 we trained for 5 epochs and for Pub-10 we trained for 50 epochs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Low-resource Domains",
"sec_num": "5.2"
},
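The epoch counts above keep the total amount of Pubmed text seen during adaptation roughly constant across the three subsets. A small helper that reproduces this scaling, assuming the 500MB/1-epoch setting as the reference point, is:

```python
def epochs_for_subset(subset_mb, reference_mb=500, reference_epochs=1):
    """Scale epochs inversely with subset size so total training volume stays constant."""
    return int(reference_epochs * reference_mb / subset_mb)

# Matches the settings reported in Section 5.2.
assert epochs_for_subset(500) == 1
assert epochs_for_subset(100) == 5
assert epochs_for_subset(10) == 50
```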
{
"text": "When pretraining only on Pubmed, the amount of training data used does not have an impact on performance. All of Pub-10/100/500 perform similarly. Performance on ChemProt is higher than the multidomain model by around 0.6, while on the rest of the tasks accuracy is not as high.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Low-resource Domains",
"sec_num": "5.2"
},
{
"text": "If we further adapt the multidomain model to the Pubmed sets, we get a larger improvement over the original multidomain model, not only on the tasks in the biomedical domain, but over all examined tasks. In ChemProt, the improvement is around 2.4 over the original multidomain model and around 1.0 over the single-domain Pubmed models. For the rest of the tasks we see marginal improvements across the board and on average the new multidomain model (Am-RC-R-Ar-P) performs the best out of all models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Low-resource Domains",
"sec_num": "5.2"
},
{
"text": "Results here indicate that performance is improved when continuously adapting to a lowresource domain than simply pretraining a model on it, regardless of domain data size. In fact, performance improves overall, possibly because of the increased amount of training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Low-resource Domains",
"sec_num": "5.2"
},
{
"text": "Pub-10 Pub-100 Pub-500 +Pub-10 +Pub-100 +Pub-500 ACL-ARC 70. Table 2 : Comparison of a) our main multidomain model (Multi), b) models pretrained solely on Pubmed (Pub-10/100/500), and c) models after continued adaptation to Pubmed (+Pub-10/100/500).",
"cite_spans": [],
"ref_spans": [
{
"start": 61,
"end": 68,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multi",
"sec_num": null
},
{
"text": "To establish whether the multidomain model is indeed benefiting from exposure to multiple domains, or whether this is a case where more data means better modeling, we train a model on solely Amazon using as much data as the total amount of data in the main multidomain model (roughly 8GBs).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to Irrelevant Domain",
"sec_num": "5.3"
},
{
"text": "We show that this model performs worse than the multidomain model (Appendix D). Thus, it is the use of multiple domains that is beneficial in this setting and not strictly the amount of data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation to Irrelevant Domain",
"sec_num": "5.3"
},
{
"text": "Experiments were conducted to determine how much domain adaptation order affects performance. In our main comparisons, the domain order was Amazon \u2192 Reddit Comments \u2192 Realnews \u2192 Arxiv (Am-RC-R-Ar). Here we examine the accuracy of the rest of the possible adaptation orders. The average performance of all orders across the given tasks is 81.4, with a minimum of 81.1 and a maximum of 81.7, whereas base DistilBERT scores an average of 79.3. In the worst case, there is still an improvement of 1.8, while on average we see an improvement of 2.1 over the base model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation Order",
"sec_num": "5.4"
},
{
"text": "In general, performance didn't fluctuate substantially between different orders. A plausible assumption is that the last adapted domain would have a large effect on performance, especially on tasks in that domain. This is not the case though; there seems to be no correlation between last adapted domain and task accuracy. Extensive results are presented in Appendix F.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation Order",
"sec_num": "5.4"
},
{
"text": "So far we have only the case where we continuously adapt to domains in succession. After Domain A, we adapt to Domain B, then C, etc. What happens when we adapt on all domains at the same time? When training sequentially, it is plausible that a later domain will overpower an earlier one. Maybe this will be mitigated by training on all domains jointly. For this experiment, domain datasets are merged into the same training set via the following scheme: the first batch is comprised of Domain A samples, the second batch of Domain B and so on. We observe that performance remained unchanged. We showcase this experiment in Appendix E.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Domain Adaptation",
"sec_num": "5.5"
},
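The interleaving scheme for joint adaptation can be expressed as a round-robin over per-domain batch iterators. The sketch below is an illustration of that scheme, not the authors' implementation; `domain_loaders` is an assumed list of batch iterables (e.g. one PyTorch DataLoader per domain):

```python
def round_robin_batches(domain_loaders):
    """Yield training batches by cycling over domains: one batch from Domain A,
    then one from Domain B, and so on, until every domain is exhausted."""
    iterators = [iter(loader) for loader in domain_loaders]
    active = list(range(len(iterators)))
    while active:
        for idx in list(active):  # iterate over a copy so exhausted domains can be removed
            try:
                yield next(iterators[idx])
            except StopIteration:
                active.remove(idx)
```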
{
"text": "In this work we show that domain adaptation can be extended to multiple domains. These multidomain models are able to tackle tasks across various domains with minimal performance drop compared to single-domain models, while using fewer resources and reducing our carbon footprint.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In addition, based on Zhao et al. 2020, we can use several finetuned instances of a multidomain model for a number of tasks, with negligible increase in memory usage. So our multidomain models are also beneficial on low-resource devices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Finally, we show that adapting to multiple domains always provides a performance increase and that tasks in low-resource domains receive a boost from multidomain models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In ACL-ARC (Jurgens et al., 2018) , the task is to classify citation intent in excerpts from ACL papers. SciCite (Cohan et al., 2019) is also a citation intent classification task, covering multiple scientific domains. HyperPartisan (Kiesel et al., 2019 ) is a news dataset, where given an article the task is to predict whether it is hyperpartisan (ie. one-sided) or not. In AG-News (Zhang et al., 2015) , we need to predict one of four possible news topics given the article text. In IMDB (Maas et al., 2011) and Clothing Reviews (Brooks, 2018) , given a review text the corresponding rating must be inferred. In SARC (Khodak et al., 2018) , we are tasked with identifying whether a Reddit comment contains sarcasm or not. TalkDown (Wang and Potts, 2019) presents Reddit comment pairs and we are tasked with identifying if the reply is condescending to the original comment. PubMed-RCT (Dernoncourt and Lee, 2017) is a dataset containing sentences from biomedical paper abstracts alongside their role in the abstract (for example, 'background', 'result'), while for ChemProt (Kringelum J, 2016) we are tasked with identifying relations between proteins and chemicals.",
"cite_spans": [
{
"start": 11,
"end": 33,
"text": "(Jurgens et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 113,
"end": 133,
"text": "(Cohan et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 233,
"end": 253,
"text": "(Kiesel et al., 2019",
"ref_id": "BIBREF11"
},
{
"start": 384,
"end": 404,
"text": "(Zhang et al., 2015)",
"ref_id": "BIBREF27"
},
{
"start": 491,
"end": 510,
"text": "(Maas et al., 2011)",
"ref_id": "BIBREF16"
},
{
"start": 532,
"end": 546,
"text": "(Brooks, 2018)",
"ref_id": "BIBREF0"
},
{
"start": 620,
"end": 641,
"text": "(Khodak et al., 2018)",
"ref_id": "BIBREF10"
},
{
"start": 734,
"end": 756,
"text": "(Wang and Potts, 2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Task Data Overview",
"sec_num": null
},
{
"text": "For domain adaptation, hyperparameters are the same for both setups (single-domain and multidomain), across all domains. Minimal hyperparameter tuning was performed, with the main goal being to keep training as computationally efficient as possible. Batch size and sequence length were set to 32 and the learning rate to 1e-5. Models were trained for a single epoch. The multidomain model was therefore trained for 4 epochs in total, one for each domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Domain Adaptation Hyperparameters",
"sec_num": null
},
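For reference, the adaptation hyperparameters above can be collected into a single configuration mapping; this is only an illustrative summary of the values stated in this appendix:

```python
DOMAIN_ADAPTATION_CONFIG = {
    "per_device_train_batch_size": 32,
    "max_seq_length": 32,
    "learning_rate": 1e-5,
    "num_train_epochs_per_domain": 1,  # the multidomain model thus sees 4 epochs in total
}
```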
{
"text": "Apart from the difference in epochs and the number of classifier neurons, all other hyperparameters are the same for all models during supervised task learning. Learning rate was kept at 4e-5, as suggested in Devlin et al. (2019) . Maximum sequence length was set to 128 while we used 32 batches for training and testing. All hyperparameters were selected upon evaluation on the development sets.",
"cite_spans": [
{
"start": 209,
"end": 229,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "C Supervised Task Learning Hyperparameters",
"sec_num": null
},
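The corresponding task-learning hyperparameters, again only as an illustrative summary of the values stated here:

```python
TASK_LEARNING_CONFIG = {
    "learning_rate": 4e-5,   # as suggested in Devlin et al. (2019)
    "max_seq_length": 128,
    "train_batch_size": 32,
    "eval_batch_size": 32,
    # Epochs (1 for most tasks, 2 for ACL-ARC and HyperPartisan) and the number
    # of classifier output neurons vary per task.
}
```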
{
"text": "Results for adaptation to an irrelevant domain. In this case, DistilBert is pretrained entirely on Amazon (8GBs). As in Gururangan et al. (2020), we find that when adapting to an irrelevant domain, performance does not increase. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D Irrelevant Domain Adaptation Results",
"sec_num": null
},
{
"text": "Here we compare results between sequential (Am-RC-R-Ar) and joint domain adaptation. Results remain similar, showing that there is no substantial difference between the two adaptation methods. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E Joint Domain Adaptation Results",
"sec_num": null
},
{
"text": "Results for adaptation order experiments are given in Tables 5, 6 , 7 and 8 (alphabetical order).",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 65,
"text": "Tables 5, 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "F Domain Adaptation Order Results",
"sec_num": null
},
{
"text": "Code and instructions for data acquisition are available at https://github.com/antmarakis/multidomain green nlp.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Accessible online here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Acknowledgments. This work was supported by ERCAdG #740516.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
},
{
"text": "Am-Ar-R-RC Am-Ar-RC-R Am-R-Ar-RC Am-R-RC-Ar Am-RC-Ar-R Am-RC-R-Ar ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Women's e-commerce clothing reviews",
"authors": [
{
"first": "Nick",
"middle": [],
"last": "Brooks",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "23--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nick Brooks. 2018. Women's e-commerce clothing re- views. Accessed: 23-06-2020.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Structural scaffolds for citation intent classification in scientific publications",
"authors": [
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
},
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "Madeleine",
"middle": [],
"last": "Van Zuylen",
"suffix": ""
},
{
"first": "Field",
"middle": [],
"last": "Cady",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arman Cohan, Waleed Ammar, Madeleine Van Zuylen, and Field Cady. 2019. Structural scaffolds for cita- tion intent classification in scientific publications. In NAACL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A discourse-aware attention model for abstractive summarization of long documents",
"authors": [
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
},
{
"first": "Franck",
"middle": [],
"last": "Dernoncourt",
"suffix": ""
},
{
"first": "Soon",
"middle": [],
"last": "Doo",
"suffix": ""
},
{
"first": "Trung",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Seokhwan",
"middle": [],
"last": "Bui",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Nazli",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Goharian",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "615--621",
"other_ids": {
"DOI": [
"10.18653/v1/N18-2097"
]
},
"num": null,
"urls": [],
"raw_text": "Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Na- zli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long docu- ments. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 2 (Short Papers), pages 615-621, New Orleans, Louisiana. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "PubMed 200k RCT: a dataset for sequential sentence classification in medical abstracts",
"authors": [
{
"first": "Franck",
"middle": [],
"last": "Dernoncourt",
"suffix": ""
},
{
"first": "Ji",
"middle": [
"Young"
],
"last": "Lee",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "308--313",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franck Dernoncourt and Ji Young Lee. 2017. PubMed 200k RCT: a dataset for sequential sentence clas- sification in medical abstracts. In Proceedings of the Eighth International Joint Conference on Natu- ral Language Processing (Volume 2: Short Papers), pages 308-313, Taipei, Taiwan. Asian Federation of Natural Language Processing.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "2020. Don't stop pretraining: Adapt language models to domains and tasks",
"authors": [
{
"first": "Ana",
"middle": [],
"last": "Suchin Gururangan",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Marasovi\u0107",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Downey",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Unsupervised domain adaptation of contextualized embeddings for sequence labeling",
"authors": [
{
"first": "Xiaochuang",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4238--4248",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1433"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaochuang Han and Jacob Eisenstein. 2019. Unsu- pervised domain adaptation of contextualized em- beddings for sequence labeling. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4238-4248, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Ups and downs",
"authors": [
{
"first": "Ruining",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Mcauley",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 25th International Conference on World Wide Web -WWW '16",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/2872427.2883037"
]
},
"num": null,
"urls": [],
"raw_text": "Ruining He and Julian McAuley. 2016. Ups and downs. Proceedings of the 25th International Conference on World Wide Web -WWW '16.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Measuring the evolution of a scientific field through citation frames",
"authors": [
{
"first": "David",
"middle": [],
"last": "Jurgens",
"suffix": ""
},
{
"first": "Srijan",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Raine",
"middle": [],
"last": "Hoover",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Mc-Farland",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "391--406",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00028"
]
},
"num": null,
"urls": [],
"raw_text": "David Jurgens, Srijan Kumar, Raine Hoover, Dan Mc- Farland, and Dan Jurafsky. 2018. Measuring the evo- lution of a scientific field through citation frames. Transactions of the Association for Computational Linguistics, 6:391-406.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A large self-annotated corpus for sarcasm",
"authors": [
{
"first": "Mikhail",
"middle": [],
"last": "Khodak",
"suffix": ""
},
{
"first": "Nikunj",
"middle": [],
"last": "Saunshi",
"suffix": ""
},
{
"first": "Kiran",
"middle": [],
"last": "Vodrahalli",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikhail Khodak, Nikunj Saunshi, and Kiran Vodra- halli. 2018. A large self-annotated corpus for sar- casm. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "SemEval-2019 task 4: Hyperpartisan news detection",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Kiesel",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Mestre",
"suffix": ""
},
{
"first": "Rishabh",
"middle": [],
"last": "Shukla",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Payam",
"middle": [],
"last": "Adineh",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Corney",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "829--839",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2145"
]
},
"num": null,
"urls": [],
"raw_text": "Johannes Kiesel, Maria Mestre, Rishabh Shukla, Em- manuel Vincent, Payam Adineh, David Corney, Benno Stein, and Martin Potthast. 2019. SemEval- 2019 task 4: Hyperpartisan news detection. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 829-839, Minneapo- lis, Minnesota, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Chemprot-3.0: a global chemical biology diseases mapping",
"authors": [
{
"first": "T",
"middle": [
"I"
],
"last": "Brunak S Lund O Oprea",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Taboureau O Kringelum",
"suffix": ""
},
{
"first": "S",
"middle": [
"K"
],
"last": "Kjaerulff",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1093/database/bav123"
]
},
"num": null,
"urls": [],
"raw_text": "Brunak S Lund O Oprea TI Taboureau O Kringelum J, Kjaerulff SK. 2016. Chemprot-3.0: a global chemi- cal biology diseases mapping.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Quantifying the carbon emissions of machine learning",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "Lacoste",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Luccioni",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Schmidt",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Dandres",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. 2019. Quantifying the carbon emissions of machine learning.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "S2ORC: The semantic scholar open research corpus",
"authors": [
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Lucy",
"middle": [
"Lu"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Rodney",
"middle": [],
"last": "Kinney",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Weld",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4969--4983",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kin- ney, and Daniel Weld. 2020. S2ORC: The semantic scholar open research corpus. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 4969-4983, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning word vectors for sentiment analysis",
"authors": [
{
"first": "Andrew",
"middle": [
"L"
],
"last": "Maas",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"E"
],
"last": "Daly",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"T"
],
"last": "Pham",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "142--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analy- sis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Hu- man Language Technologies, pages 142-150, Port- land, Oregon, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Inexpensive domain adaptation of pretrained language models: Case studies on biomedical ner and covid-19 qa",
"authors": [
{
"first": "Nina",
"middle": [],
"last": "Poerner",
"suffix": ""
},
{
"first": "Ulli",
"middle": [],
"last": "Waltinger",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nina Poerner, Ulli Waltinger, and Hinrich Sch\u00fctze. 2020. Inexpensive domain adaptation of pretrained language models: Case studies on biomedical ner and covid-19 qa.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Adapt or get left behind: Domain adaptation through BERT language model finetuning for aspect-target sentiment classification",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Rietzler",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Stabinger",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Opitz",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Engl",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "4933--4941",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Rietzler, Sebastian Stabinger, Paul Opitz, and Stefan Engl. 2020. Adapt or get left behind: Domain adaptation through BERT language model finetuning for aspect-target sentiment classification. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 4933-4941, Mar- seille, France. European Language Resources Asso- ciation.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Movement pruning: Adaptive sparsity by finetuning",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Thomas Wolf, and Alexander M. Rush. 2020. Movement pruning: Adaptive sparsity by fine- tuning.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Energy and policy considerations for deep learning in NLP",
"authors": [
{
"first": "Emma",
"middle": [],
"last": "Strubell",
"suffix": ""
},
{
"first": "Ananya",
"middle": [],
"last": "Ganesh",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3645--3650",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1355"
]
},
"num": null,
"urls": [],
"raw_text": "Emma Strubell, Ananya Ganesh, and Andrew McCal- lum. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 3645-3650, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Mobilebert: a compact task-agnostic bert for resource",
"authors": [
{
"first": "Zhiqing",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Hongkun",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Renjie",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Denny",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 2020. Mobilebert: a compact task-agnostic bert for resource-limited de- vices.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Tl;dr: Mining Reddit to learn automatic summarization",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "V\u00f6lske",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
},
{
"first": "Shahbaz",
"middle": [],
"last": "Syed",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Workshop on New Frontiers in Summarization",
"volume": "",
"issue": "",
"pages": "59--63",
"other_ids": {
"DOI": [
"10.18653/v1/W17-4508"
]
},
"num": null,
"urls": [],
"raw_text": "Michael V\u00f6lske, Martin Potthast, Shahbaz Syed, and Benno Stein. 2017. Tl;dr: Mining Reddit to learn au- tomatic summarization. In Proceedings of the Work- shop on New Frontiers in Summarization, pages 59- 63, Copenhagen, Denmark. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "TalkDown: A corpus for condescension detection in context",
"authors": [
{
"first": "Zijian",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3711--3719",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1385"
]
},
"num": null,
"urls": [],
"raw_text": "Zijian Wang and Christopher Potts. 2019. TalkDown: A corpus for condescension detection in context. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 3711- 3719, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1112--1122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Defending against neural fake news",
"authors": [
{
"first": "Rowan",
"middle": [],
"last": "Zellers",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Franziska",
"middle": [],
"last": "Roesner",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "9054--9065",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. In H. Wallach, H. Larochelle, A. Beygelz- imer, F. d'Alch\u00e9 Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 9054-9065. Curran Associates, Inc.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Character-level convolutional networks for text classification",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Junbo",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Masking as an efficient alternative to finetuning for pretrained language models",
"authors": [
{
"first": "Mengjie",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Jaggi",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mengjie Zhao, Tao Lin, Martin Jaggi, and Hinrich Sch\u00fctze. 2020. Masking as an efficient alternative to finetuning for pretrained language models.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"type_str": "table",
"num": null,
"html": null,
"text": "Accuracy in percentage for task/model combinations. Standard deviation is shown in subscript (based on three runs). With Base we denote the original model, with Single the model trained on the corresponding domain and with Multi the multidomain model. For the Single model, we show the best accuracy out of all the models in MNLI.",
"content": "<table/>"
},
"TABREF4": {
"type_str": "table",
"num": null,
"html": null,
"text": "Comparison between our main model (Multi) and a model pretrained entirely on Amazon data.",
"content": "<table/>"
},
"TABREF6": {
"type_str": "table",
"num": null,
"html": null,
"text": "Comparison of sequential and joint models.",
"content": "<table/>"
}
}
}
}