{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:11:07.258467Z" }, "title": "On the Hidden Negative Transfer in Sequential Transfer Learning for Domain Adaptation from News to Tweets", "authors": [ { "first": "Sara", "middle": [], "last": "Meftah", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universit\u00e9 Paris-Saclay", "location": { "postCode": "F-91120", "settlement": "Palaiseau", "country": "France" } }, "email": "" }, { "first": "Nasredine", "middle": [], "last": "Semmar", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universit\u00e9 Paris-Saclay", "location": { "postCode": "F-91120", "settlement": "Palaiseau", "country": "France" } }, "email": "" }, { "first": "Youssef", "middle": [], "last": "Tamaazousti", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universit\u00e9 Paris-Saclay", "location": { "postCode": "F-91120", "settlement": "Palaiseau", "country": "France" } }, "email": "" }, { "first": "Hassane", "middle": [], "last": "Essafi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universit\u00e9 Paris-Saclay", "location": { "postCode": "F-91120", "settlement": "Palaiseau", "country": "France" } }, "email": "" }, { "first": "Fatiha", "middle": [], "last": "Sadat", "suffix": "", "affiliation": { "laboratory": "", "institution": "UQ\u00c0M", "location": { "settlement": "Montr\u00e9al", "country": "Canada" } }, "email": "sadat.fatiha@uqam.ca" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Transfer Learning has been shown to be a powerful tool for Natural Language Processing (NLP) and has outperformed the standard supervised learning paradigm, as it takes benefit from the pre-learned knowledge. Nevertheless, when transfer is performed between less related domains, it brings a negative transfer, i.e. it hurts the transfer performance. In this research, we shed light on the hidden negative transfer occurring when transferring from the News domain to the Tweets domain, through quantitative and qualitative analysis. Our experiments on three NLP tasks: Part-Of-Speech tagging, Chunking and Named Entity recognition reveal interesting insights.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Transfer Learning has been shown to be a powerful tool for Natural Language Processing (NLP) and has outperformed the standard supervised learning paradigm, as it takes benefit from the pre-learned knowledge. Nevertheless, when transfer is performed between less related domains, it brings a negative transfer, i.e. it hurts the transfer performance. In this research, we shed light on the hidden negative transfer occurring when transferring from the News domain to the Tweets domain, through quantitative and qualitative analysis. Our experiments on three NLP tasks: Part-Of-Speech tagging, Chunking and Named Entity recognition reveal interesting insights.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "High performing NLP neural tools often require huge volumes of annotated data to produce powerful models and prevent over-fitting. 
Consequently, in the case of social media content (informal texts) such as Tweets, it is difficult to achieve the performance of state-of-the-art neural models on News (formal texts).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The last few years have witnessed an escalated interest in studying Transfer Learning (TL) for neural networks to overcome the problem of the lack of annotated data. TL aims at performing a task on a target dataset using features learned from a source dataset (Pan and Yang, 2009) . TL has been proven to be effective for a wide range of applications (Zamir et al., 2018; Long et al., 2015; Moon and Carbonell, 2017) , especially for low-resourced domains.", "cite_spans": [ { "start": 260, "end": 280, "text": "(Pan and Yang, 2009)", "ref_id": "BIBREF15" }, { "start": 351, "end": 371, "text": "(Zamir et al., 2018;", "ref_id": "BIBREF26" }, { "start": 372, "end": 390, "text": "Long et al., 2015;", "ref_id": "BIBREF8" }, { "start": 391, "end": 416, "text": "Moon and Carbonell, 2017)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, it has been shown in many works in the literature (Rosenstein et al., 2005; Ge et al., 2014; Ruder, 2019; Gui et al., 2018a; Cao et al., 2018; Chen et al., 2019; O'Neill, 2019) that, when source and target domains are less related (e.g. languages from different families), sequential transfer learning may lead to a negative effect on the performance, instead of improving it. This phenomenon is referred to as negative transfer. Precisely, negative transfer is considered when transfer learning is harmful for the target task/dataset, i.e. the performance when using transfer learning algorithm is lower than that with a solely supervised training on in-target data (Torrey and Shavlik, 2010) .", "cite_spans": [ { "start": 59, "end": 84, "text": "(Rosenstein et al., 2005;", "ref_id": "BIBREF19" }, { "start": 85, "end": 101, "text": "Ge et al., 2014;", "ref_id": "BIBREF3" }, { "start": 102, "end": 114, "text": "Ruder, 2019;", "ref_id": "BIBREF20" }, { "start": 115, "end": 133, "text": "Gui et al., 2018a;", "ref_id": "BIBREF4" }, { "start": 134, "end": 151, "text": "Cao et al., 2018;", "ref_id": "BIBREF0" }, { "start": 152, "end": 170, "text": "Chen et al., 2019;", "ref_id": "BIBREF1" }, { "start": 171, "end": 185, "text": "O'Neill, 2019)", "ref_id": "BIBREF13" }, { "start": 676, "end": 702, "text": "(Torrey and Shavlik, 2010)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Several works (Gui et al., 2017 (Gui et al., , 2018b Meftah et al., 2018a,b; M\u00e4rz et al., 2019) have shown that sequential transfer learning from the News resource-rich domain to the Tweets low-resource domain enhances the performance of sequence labelling of Tweets. Hence, following the above definition of negative transfer, transfer learning from News to Tweets does not beget a negative transfer. Contrariwise, in this work, we rather consider the hidden negative transfer, i.e. 
the percentage of predictions which were correctly tagged by random initialisation, but using transfer learning falsified.", "cite_spans": [ { "start": 14, "end": 31, "text": "(Gui et al., 2017", "ref_id": "BIBREF6" }, { "start": 32, "end": 52, "text": "(Gui et al., , 2018b", "ref_id": "BIBREF5" }, { "start": 53, "end": 76, "text": "Meftah et al., 2018a,b;", "ref_id": null }, { "start": 77, "end": 95, "text": "M\u00e4rz et al., 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we take a step towards identifying and analysing the impact of transfer from News to Tweets. Precisely, we perform an empirical analysis to investigate the hidden negative transfer. First, we show in section.5.1 that, the final gain brought by TL can be separated into two categories: positive transfer and negative transfer. We define positive transfer as the percentage of tokens that were wrongly predicted by random initialisation, but the TL changed to the correct ones. In comparison, negative transfer represents the percentage of words which were tagged correctly by random initialisation, but using TL gives wrong predictions. Then, in section.5.2, we study the impact of pretraining state on negative and positive transfer. Finally, in section.5.3, we provide some qualitative examples of negative transfer. Our experiments on three NLP tasks (Part-Of-Speech tagging (POS), Chunking (CK) and Named Entity recognition (NER)) reveal interesting insights.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of the paper is organised as follow. We first present, briefly, the sequence tagging neural model ( \u00a72). Then, we describe the sequential transfer learning method ( \u00a73), followed by a short presentation of the involved datasets and tasks ( \u00a74.1). Then, we report the results of our analysis to highlight the hidden negative transfer occurring when transferring from News to Tweets ( \u00a75). Finally, we wrap up with a conclusion and future work ( \u00a76).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We perform experiments on 3 Sequence Tagging (ST) tasks: Part-Of-Speech tagging (POS), Chunking (CK) and Named Entity Recognition (NER). Given an input sentence of n successive tokens S = [w 1 , . . . , w n ], the goal of a ST model is to predict the tag c i \u2208 C of every w i , with C being the tag-set. We use a common ST neural model. It includes three main components. First, we have a WRE (Word Representation Extractor) to build, for each word w i , a final representation x i combining two hybrid representations; a word-level embedding (denoted \u03a5 word ) and a character-level embedding based on a bidirectional-Long Short-Term Memory (biLSTMs) encoder (denoted \u03a5 char ). Second, the x i representation is fed into a Features Extractor (FE) (denoted \u03a6) based on a single-layer BiLSTMs network, to produce a hidden representation h i which constitutes the input of the Classifier (denoted \u03a8): a fully-connected (FC) layer used for classification. 
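To make these components concrete, below is a minimal PyTorch sketch of such a tagger; it is an illustration rather than the authors' code. The dimensions follow the implementation details of Section 4.2 (50-dimensional character embeddings, a 100-unit character-level biLSTM, 300-dimensional word embeddings and a 200-unit token-level biLSTM), while the vocabulary and tag-set sizes are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class SequenceTagger(nn.Module):
    """Sketch of the WRE (word + char embeddings), FE (biLSTM) and Classifier (FC) tagger."""

    def __init__(self, n_words, n_chars, n_tags,
                 word_dim=300, char_dim=50, char_hidden=100, fe_hidden=200):
        super().__init__()
        # Upsilon_word: word-level embeddings (pre-loaded from GloVe in the paper).
        self.word_emb = nn.Embedding(n_words, word_dim)
        # Upsilon_char: character embeddings encoded by a character-level biLSTM.
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, char_hidden,
                                 bidirectional=True, batch_first=True)
        # Phi: single-layer token-level biLSTM feature extractor.
        self.fe = nn.LSTM(word_dim + 2 * char_hidden, fe_hidden,
                          bidirectional=True, batch_first=True)
        # Psi: fully-connected classifier over the tag-set C.
        self.classifier = nn.Linear(2 * fe_hidden, n_tags)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq_len); char_ids: (batch, seq_len, word_len).
        b, s, w = char_ids.shape
        char_out, _ = self.char_lstm(self.char_emb(char_ids.view(b * s, w)))
        char_repr = char_out[:, -1, :].view(b, s, -1)  # crude per-word summary (simplification)
        x = torch.cat([self.word_emb(word_ids), char_repr], dim=-1)  # x_i
        h, _ = self.fe(x)                                            # h_i
        return self.classifier(h)                                    # scores over C for each token

# Hypothetical transfer step (Section 3): Upsilon and Phi come from the source model,
# Psi stays randomly initialised before fine-tuning on the small Tweets data-set.
source = SequenceTagger(n_words=50_000, n_chars=100, n_tags=36)  # e.g. WSJ/PTB tag-set
target = SequenceTagger(n_words=50_000, n_chars=100, n_tags=17)  # e.g. TweeBank tag-set
shared = {k: v for k, v in source.state_dict().items()
          if not k.startswith("classifier")}
target.load_state_dict(shared, strict=False)
```

The last lines sketch the sequential transfer described later in Section 3: Υ (the word and character representations) and Φ (the feature extractor) are copied from the source model, while Ψ (the classifier) is left randomly initialised before fine-tuning on the target Tweets data.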
Formally, given w i , the predictions are obtained using the following equation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence Tagging Neural Architecture", "sec_num": "2" }, { "text": "w i :\u0177 i = (\u03a8 \u2022 \u03a6 \u2022 \u03a5)(w i ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence Tagging Neural Architecture", "sec_num": "2" }, { "text": "With \u03a5 ensuring the concatenation of \u03a5 char and \u03a5 word . 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence Tagging Neural Architecture", "sec_num": "2" }, { "text": "We use a simple sequential TL method to transfer knowledge from the News domain to the Tweetsdomain. It consists in learning a source model on the source task with enough data from the News domain, then transferring a part of the learned parameters to initialise the target model, which is further fine-tuned on the target task with few training examples from the Tweets domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer Learning Method", "sec_num": "3" }, { "text": "Specifically, in this work we perform the TL following three simple yet effective steps: 1) The source model is learnt using a large annotated dataset from the source domain. 2) We transfer to the target model the first set of parameters (\u03a5 and \u03a6) of the source model, while the second set of parameters (\u03a8) of the target model is randomly initialised. Then, 3) the target model is further fine-tuned on the small target data-set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transfer Learning Method", "sec_num": "3" }, { "text": "We conduct experiments on TL from English News (source-domain) to English Tweets (target-domain) on three tasks (Datasets statistics are given in Table.1):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data-sets", "sec_num": "4.1" }, { "text": "\u2022 POS tagging: we use the Wall Street Journal (WSJ) part of Penn-Tree-Bank (PTB) as a source-dataset. Regarding the target-datasets, we used three Tweets datasets: TPoS (Ritter et al., 2011) , ARK (Owoputi et al., 2013) and TweeBank (Liu et al., 2018 ).", "cite_spans": [ { "start": 169, "end": 190, "text": "(Ritter et al., 2011)", "ref_id": "BIBREF18" }, { "start": 197, "end": 219, "text": "(Owoputi et al., 2013)", "ref_id": "BIBREF14" }, { "start": 233, "end": 250, "text": "(Liu et al., 2018", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Data-sets", "sec_num": "4.1" }, { "text": "\u2022 CK: for the source dataset, we use the CONLL2000 shared task's English data-set (Tjong Kim Sang and Buchholz, 2000) . Regarding the target dataset, we use TChunk Tweets data-set (Ritter et al., 2011 ) (the same corpus as TPoS).", "cite_spans": [ { "start": 93, "end": 117, "text": "Sang and Buchholz, 2000)", "ref_id": "BIBREF22" }, { "start": 180, "end": 200, "text": "(Ritter et al., 2011", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Data-sets", "sec_num": "4.1" }, { "text": "\u2022 NER: regarding the source domain, we make use of the English newswire dataset CONLL-03 from the CONLL 2003 shared task (Tjong Kim Sang and De Meulder, 2003) . 
target domain, we conduct our experiments on WNUT2017 dataset (Derczynski et al., 2017) .", "cite_spans": [ { "start": 132, "end": 158, "text": "Sang and De Meulder, 2003)", "ref_id": "BIBREF23" }, { "start": 223, "end": 248, "text": "(Derczynski et al., 2017)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Data-sets", "sec_num": "4.1" }, { "text": "In the standard word-level embeddings, tokens are converted to lower-case while the character-level component still retains access to the capitalisation information. We set the randomly initialised character embedding dimension at 50, the dimension of hidden states of the character-level biLSTM at 100 and used 300-dimensional word-level embeddings. Word-level embeddings were pre-loaded from publicly available GloVe vectors pre-trained on 42 billions words collected through web crawling and containing 1.9M different words (Pennington et al., 2014) . These embeddings are also updated during training. For the FE component, we use a single layer biLSTM (token-level feature extractor) and set the number of units to 200. In all of our experiments, both pretraining and fine-tuning were preformed using the same training settings, i.e. SGD with momentum and early stopping, and mini-batches of 16 sentences, and a fixed learning rate of 1.5 \u00d7 10 \u22122 . Throughout this thesis, all our models are implemented with the PyTorch library (Paszke et al., 2017) .", "cite_spans": [ { "start": 527, "end": 552, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF17" }, { "start": 1034, "end": 1055, "text": "(Paszke et al., 2017)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "4.2" }, { "text": "First, in order to have an idea about the final impact of TL compared to randomly initialised models, we provide in Table. 2 the performance of Random Initialisation and Transfer Learning. Clearly, TL enhances the performance across all data-sets and tasks. In the following sub-sections, we attempt to analyse thoroughly these results by showing that the impact of TL is two fold, positive transfer and negative transfer.", "cite_spans": [], "ref_spans": [ { "start": 116, "end": 122, "text": "Table.", "ref_id": null } ], "eq_spans": [], "section": "Analysis", "sec_num": "5" }, { "text": "Let us consider the gain G i brought by transfer learning compared to random initialisation for the dataset i. G i is defined as the difference between positive transfer PT i and negative transfer N T i :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantifying Negative Transfer", "sec_num": "5.1" }, { "text": "G i = PT i \u2212 N T i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantifying Negative Transfer", "sec_num": "5.1" }, { "text": "Where positive transfer PT i represents the percentage of tokens that were wrongly predicted by random initialisation, but transfer learning changed to the correct ones. negative transfer N T i represents the percentage of words which were tagged correctly by random initialisation, but using transfer learning gives wrong predictions. PT i and N T i are defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantifying Negative Transfer", "sec_num": "5.1" }, { "text": "PT i = N corrected i N i and N T i = N f alsif ied i N i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantifying Negative Transfer", "sec_num": "5.1" }, { "text": ". 
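Concretely, these two quantities (whose terms are spelled out just below) can be computed directly from the gold labels and the predictions of the two models. The following Python sketch is only an illustration and uses hypothetical toy predictions:

```python
from typing import Sequence, Tuple

def transfer_effects(gold: Sequence[str],
                     pred_random: Sequence[str],
                     pred_transfer: Sequence[str]) -> Tuple[float, float]:
    """Return (PT, NT) in %: tokens corrected vs. falsified by transfer learning,
    relative to the randomly initialised model, over the same validation tokens."""
    assert len(gold) == len(pred_random) == len(pred_transfer)
    n = len(gold)
    corrected = sum(1 for g, r, t in zip(gold, pred_random, pred_transfer)
                    if r != g and t == g)   # wrong from scratch, right with TL
    falsified = sum(1 for g, r, t in zip(gold, pred_random, pred_transfer)
                    if r == g and t != g)   # right from scratch, wrong with TL
    return 100.0 * corrected / n, 100.0 * falsified / n

# Hypothetical toy example; the final gain is G = PT - NT.
pt, nt = transfer_effects(
    gold=["NOUN", "VERB", "PROPN", "ADV"],
    pred_random=["NOUN", "NOUN", "PROPN", "ADJ"],
    pred_transfer=["NOUN", "VERB", "NOUN", "ADV"],
)
print(f"PT = {pt:.1f}%, NT = {nt:.1f}%, gain = {pt - nt:.1f}%")
```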
Where Figure 1 : Impact on predictions made by TL compared to Random initialisation. Positive Transfer stands for the percentage of predictions that were wrong in the training from scratch scheme but the TL changed to the correct ones, and Negative Transfer stands for the percentage of predictions which the random model tagged correctly, but the TL falsified.", "cite_spans": [], "ref_spans": [ { "start": 8, "end": 16, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Quantifying Negative Transfer", "sec_num": "5.1" }, { "text": "N i the total number of tokens in the validation-set of the dataset i . N corrected i is the number of tokens from the validation-set of the dataset i , that were wrongly tagged by the the model trained from scratch but are correctly predicted by the model using transfer learning. And N f alsif ied i is the number of tokens from the validation-set of the dataset i , that were correctly tagged by the the model trained from scratch but are wrongly predicted by the model using transfer learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantifying Negative Transfer", "sec_num": "5.1" }, { "text": "We show in Figure. 1 the results on English Tweets datasets TpoS, ArK and TweeBank for POS; WNUT for NER; and Tchunk for CK. First tagged with the classic training scheme (Random) and then using TL. Blue bars show the percentage of positive transfer, and red bars give the percentage of negative transfer. We observe that even though TL approach is effective, since the resulting positive transfer is higher than negative transfer in all cases, this last mitigates the final gain brought by TL. For instance, on Tchunk dataset, TL corrected \u223c4.7% of predictions but falsified \u223c1.7%, which reduces the final gain to \u223c3%.", "cite_spans": [], "ref_spans": [ { "start": 11, "end": 18, "text": "Figure.", "ref_id": null } ], "eq_spans": [], "section": "Quantifying Negative Transfer", "sec_num": "5.1" }, { "text": "So far in our experiments we used the pretrained parameters from the best model trained on the source dataset. In simple words, we picked the model at the epoch with the highest performance on the source validation-set. In this analysis, we study when pretrained parameters are ready to be transferred. Specifically, we pick the pretrained weights at different pretraining epochs; that we call the pretraining states. Then, we assess the performance when transferring each. In Figure. 2, we plot for each target dataset, the curves of positive transfer (blue curves) and negative transfer (red curves) brought by initialisation with pretrained weights from different pretraining epochs compared to random initialisation. Clearly, both negative and positive transfer increase with pretraining epochs. More important, we can observe that for TweeBank and ArK datasets the negative transfer increases rapidly in the last pretraining epochs. However, for TPoS dataset, the negative transfer stays almost stable throughout pretraining epochs. This phenomenon could be explained by the fact that TPoS shares the same PTB tag-set as WSJ, whereas TweeBank and ArK use different tag-sets. 
Consequently, in the last states of pretraining, the pretrained parameters become well-tuned to the source dataset and specific to the source tag-set, leading to an increase of negative transfer and thus a drop in transfer performance.", "cite_spans": [], "ref_spans": [ { "start": 477, "end": 484, "text": "Figure.", "ref_id": null } ], "eq_spans": [], "section": "The impact of pretraining state on Negative Transfer", "sec_num": "5.2" }, { "text": "We illustrate in Table 3 2 concrete examples of words whose predictions were falsified when using transfer learning compared to random initialisation. Among mistakes we have observed:", "cite_spans": [], "ref_spans": [ { "start": 17, "end": 24, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Qualitative Examples of Negative Transfer:", "sec_num": "5.3" }, { "text": "\u2022 Tokens with an upper-cased first letter: In news (formal English), only proper nouns start with an upper-case letter inside sentences. Consequently, in the transfer learning scheme, the pre-trained units fail to slough this pattern which is not always respected in social media.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Examples of Negative Transfer:", "sec_num": "5.3" }, { "text": "Hence, we found that most of the tokens with an upper-cased first letter are mistakenly predicted as proper nouns (PROPN) in POS, e.g. Award, Charity, Night, etc. and as entities in NER, e.g. Father, Hey, etc., which is consistent with the findings of Seah et al. (2012) ; negative transfer is mainly due to conditional distribution differences between source and target domains.", "cite_spans": [ { "start": 252, "end": 270, "text": "Seah et al. (2012)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Qualitative Examples of Negative Transfer:", "sec_num": "5.3" }, { "text": "\u2022 Contractions are frequently used in social media to shorten a set of words. For instance, in TPoS dataset, we found that \"'s\" is in most cases predicted as a \"possessive ending (pos)\" instead of \"Verb, 3rd person singular present (vbz)\". Indeed, in formal English, \"'s\" is used in most cases to express the possessive form, e.g. \"company's decision\", but rarely in contractions that are frequently used in social media, e.g. \"How's it going with you?\". Similarly, \"wont\" is a frequent contraction for Table 3 : Examples of falsified predictions by standard fine-tuning scheme when transferring from Newsdomain to Tweets-domain. Line 1: Some words from the validation-set of each data-set. Line 2: Correct labels predicted by the classic supervised setting (Random-200). Line 3: Wrong labels predicted by TL setting. Mistake type: for words with first capital letter, \u2022 for misspelling, for contractions, \u00d7 for abbreviations.", "cite_spans": [], "ref_spans": [ { "start": 503, "end": 510, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Qualitative Examples of Negative Transfer:", "sec_num": "5.3" }, { "text": "\"will not\", e.g. \"i wont get bday money lool\", predicted as \"verb\" instead of \"modal (MD)\" by transfer learning. The same for \"id\", which stands for \"I would\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Examples of Negative Transfer:", "sec_num": "5.3" }, { "text": "\u2022 Abbreviations are frequently used in social media to shorten the way a word is standardly written. We found that transfer learning scheme stumbles on abbreviations predictions, e.g. 
2pac (Tupac), 2 (to), ur (your), wth (what the hell) and nvr (never) in ArK dataset; and luv (love) and wyd (what you doing?) in TChunk dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Examples of Negative Transfer:", "sec_num": "5.3" }, { "text": "\u2022 Misspellings: Likewise, we found that the transfer learning scheme often gives wrong predictions for misspelt words, e.g. awsome, bout, amazin.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Examples of Negative Transfer:", "sec_num": "5.3" }, { "text": "Our analysis on the hidden negative transfer from News-domain to Tweets-domain reveals interesting insights: 1) Even if using TL improves the performance on Tweets Sequence labelling, an inherent negative transfer may minimise the final gain; and 2) the negative transfer increases with the number of pretraining epochs. This study opens a set of promising directions. We plan to 1) Extend our experiments by investigating the impact of the model's hyper-parameters (size, activation functions, learning rate, etc.). 2) Investigate the impact of the similarity between source and target datasets and source and target training datasets size on the negative transfer. 3) Tackle the negative transfer problem, by identifying automatically biased neurons in the pretrained model and proceed to a pruning of the most biased ones before fine-tuning. 4) Explore negative transfer on Transformers-based pretrained models, such as BERT, XLNet, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Note that -for simplicity -, we define\u0177i only as a function of wi, but in reality\u0177i is a function of all words in the sentence, thanks to the biLSTMs component.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Classes significations: nn=N=noun=common noun / nnp=pnoun=propn=proper noun / vbz=Verb, 3rd person singular present / pos=possessive ending / prp=personal pronoun / prp$=possessive pronoun / md=modal / VBP=Verb, non-3rd person singular present / uh=!=intj=interjection / rb=R=adverb / L=nominal + verbal or verbal + nominal / E=emoticon / $=numerical / P=pre-or postposition, or subordinating conjunction / Z=proper noun + possessive ending / V=verb / adj=adjective / adp=adposition", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Partial transfer learning with selective adversarial networks", "authors": [ { "first": "Zhangjie", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Mingsheng", "middle": [], "last": "Long", "suffix": "" }, { "first": "Jianmin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Michael I Jordan", "middle": [], "last": "", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "2724--2732", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhangjie Cao, Mingsheng Long, Jianmin Wang, and Michael I Jordan. 2018. Partial transfer learning with selective adversarial networks. 
In Proceedings of the IEEE Conference on Computer Vision and Pat- tern Recognition, pages 2724-2732.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Catastrophic forgetting meets negative transfer: Batch spectral shrinkage for safe transfer learning", "authors": [ { "first": "Xinyang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Sinan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Mingsheng", "middle": [], "last": "Long", "suffix": "" }, { "first": "Jianmin", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "1908--1918", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinyang Chen, Sinan Wang, Bo Fu, Mingsheng Long, and Jianmin Wang. 2019. Catastrophic forgetting meets negative transfer: Batch spectral shrinkage for safe transfer learning. In Advances in Neural Infor- mation Processing Systems, pages 1908-1918.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Results of the wnut2017 shared task on novel and emerging entity recognition", "authors": [ { "first": "Leon", "middle": [], "last": "Derczynski", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Nichols", "suffix": "" }, { "first": "Marieke", "middle": [], "last": "Van Erp", "suffix": "" }, { "first": "Nut", "middle": [], "last": "Limsopatham", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 3rd Workshop on Noisy User-generated Text", "volume": "", "issue": "", "pages": "140--147", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leon Derczynski, Eric Nichols, Marieke van Erp, and Nut Limsopatham. 2017. Results of the wnut2017 shared task on novel and emerging entity recogni- tion. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pages 140-147.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "On handling negative transfer and imbalanced distributions in multiple source transfer learning. Statistical Analysis and Data Mining", "authors": [ { "first": "Jing", "middle": [], "last": "Liang Ge", "suffix": "" }, { "first": "Hung", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Ngo", "suffix": "" }, { "first": "Aidong", "middle": [], "last": "Li", "suffix": "" }, { "first": "", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2014, "venue": "The ASA Data Science Journal", "volume": "7", "issue": "4", "pages": "254--271", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang Ge, Jing Gao, Hung Ngo, Kang Li, and Aidong Zhang. 2014. On handling negative transfer and imbalanced distributions in multiple source transfer learning. Statistical Analysis and Data Mining: The ASA Data Science Journal, 7(4):254-271.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Negative transfer detection in transductive transfer learning", "authors": [ { "first": "Lin", "middle": [], "last": "Gui", "suffix": "" }, { "first": "Ruifeng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Qin", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Jiachen", "middle": [], "last": "Du", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2018, "venue": "International Journal of Machine Learning and Cybernetics", "volume": "9", "issue": "2", "pages": "185--197", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin Gui, Ruifeng Xu, Qin Lu, Jiachen Du, and Yu Zhou. 2018a. 
Negative transfer detection in transductive transfer learning. International Journal of Machine Learning and Cybernetics, 9(2):185-197.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Transferring from formal newswire domain with hypernet for twitter pos tagging", "authors": [ { "first": "Tao", "middle": [], "last": "Gui", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jingjing", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Minlong", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Di", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Keyu", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Xuan-Jing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2540--2549", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tao Gui, Qi Zhang, Jingjing Gong, Minlong Peng, Di Liang, Keyu Ding, and Xuan-Jing Huang. 2018b. Transferring from formal newswire domain with hy- pernet for twitter pos tagging. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2540-2549.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Part-of-speech tagging for twitter with adversarial neural networks", "authors": [ { "first": "Tao", "middle": [], "last": "Gui", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Haoran", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Minlong", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2411--2420", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tao Gui, Qi Zhang, Haoran Huang, Minlong Peng, and Xuanjing Huang. 2017. Part-of-speech tagging for twitter with adversarial neural networks. In Proceed- ings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2411-2420.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Parsing tweets into universal dependencies", "authors": [ { "first": "Yijia", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Schneider", "suffix": "" }, { "first": "Noah A", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "965--975", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yijia Liu, Yi Zhu, Wanxiang Che, Bing Qin, Nathan Schneider, and Noah A Smith. 2018. Parsing tweets into universal dependencies. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long Pa- pers), pages 965-975.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Learning transferable features with deep adaptation networks", "authors": [ { "first": "Mingsheng", "middle": [], "last": "Long", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Jianmin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Jordan", "suffix": "" } ], "year": 2015, "venue": "International conference on machine learning", "volume": "", "issue": "", "pages": "97--105", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. 2015. Learning transferable fea- tures with deep adaptation networks. In Interna- tional conference on machine learning, pages 97- 105.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Domain adaptation for part-of-speech tagging of noisy user-generated text", "authors": [ { "first": "Luisa", "middle": [], "last": "M\u00e4rz", "suffix": "" }, { "first": "Dietrich", "middle": [], "last": "Trautmann", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "3415--3420", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luisa M\u00e4rz, Dietrich Trautmann, and Benjamin Roth. 2019. Domain adaptation for part-of-speech tagging of noisy user-generated text. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long and Short Papers), pages 3415-3420.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A neural network model for part-of-speech tagging of social media texts", "authors": [ { "first": "Sara", "middle": [], "last": "Meftah", "suffix": "" }, { "first": "Nasredine", "middle": [], "last": "Semmar", "suffix": "" }, { "first": "Fatiha", "middle": [], "last": "Sadat", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sara Meftah, Nasredine Semmar, and Fatiha Sadat. 2018a. A neural network model for part-of-speech tagging of social media texts. In Proceedings of the Eleventh International Conference on Language Re- sources and Evaluation (LREC 2018).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Using neural transfer learning for morpho-syntactic tagging of southslavic languages tweets", "authors": [ { "first": "Sara", "middle": [], "last": "Meftah", "suffix": "" }, { "first": "Nasredine", "middle": [], "last": "Semmar", "suffix": "" }, { "first": "Fatiha", "middle": [], "last": "Sadat", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Raaijmakers", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects", "volume": "", "issue": "", "pages": "235--243", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sara Meftah, Nasredine Semmar, Fatiha Sadat, and Stephan Raaijmakers. 2018b. 
Using neural trans- fer learning for morpho-syntactic tagging of south- slavic languages tweets. In Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2018), pages 235-243.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Completely heterogeneous transfer learning with attention-what and what not to transfer", "authors": [ { "first": "Seungwhan", "middle": [], "last": "Moon", "suffix": "" }, { "first": "G", "middle": [], "last": "Jaime", "suffix": "" }, { "first": "", "middle": [], "last": "Carbonell", "suffix": "" } ], "year": 2017, "venue": "IJCAI", "volume": "1", "issue": "", "pages": "1--2", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seungwhan Moon and Jaime G Carbonell. 2017. Completely heterogeneous transfer learning with attention-what and what not to transfer. In IJCAI, volume 1, pages 1-2.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Learning to avoid negative transfer in few shot transfer learning", "authors": [ { "first": "O'", "middle": [], "last": "James", "suffix": "" }, { "first": "", "middle": [], "last": "Neill", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "James O'Neill. 2019. Learning to avoid negative trans- fer in few shot transfer learning.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Improved part-of-speech tagging for online conversational text with word clusters", "authors": [ { "first": "Olutobi", "middle": [], "last": "Owoputi", "suffix": "" }, { "first": "O'", "middle": [], "last": "Brendan", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Connor", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Noah A", "middle": [], "last": "Schneider", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 conference of the North American chapter of the association for computational linguistics: human language technologies", "volume": "", "issue": "", "pages": "380--390", "other_ids": {}, "num": null, "urls": [], "raw_text": "Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. In Proceedings of the 2013 conference of the North American chapter of the association for computa- tional linguistics: human language technologies, pages 380-390.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A survey on transfer learning", "authors": [ { "first": "Qiang", "middle": [], "last": "Sinno Jialin Pan", "suffix": "" }, { "first": "", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2009, "venue": "IEEE Transactions on knowledge and data engineering", "volume": "22", "issue": "10", "pages": "1345--1359", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sinno Jialin Pan and Qiang Yang. 2009. A survey on transfer learning. 
IEEE Transactions on knowledge and data engineering, 22(10):1345-1359.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Automatic differentiation in pytorch", "authors": [ { "first": "Adam", "middle": [], "last": "Paszke", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Soumith", "middle": [], "last": "Chintala", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Chanan", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zachary", "middle": [], "last": "Devito", "suffix": "" }, { "first": "Zeming", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Alban", "middle": [], "last": "Desmaison", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Antiga", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lerer", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Named entity recognition in tweets: an experimental study", "authors": [ { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the conference on empirical methods in natural language processing", "volume": "", "issue": "", "pages": "1524--1534", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Ritter, Sam Clark, Oren Etzioni, et al. 2011. Named entity recognition in tweets: an experimental study. In Proceedings of the conference on empiri- cal methods in natural language processing, pages 1524-1534. Association for Computational Linguis- tics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "To transfer or not to transfer", "authors": [ { "first": "Zvika", "middle": [], "last": "Michael T Rosenstein", "suffix": "" }, { "first": "Leslie", "middle": [ "Pack" ], "last": "Marx", "suffix": "" }, { "first": "Thomas", "middle": [ "G" ], "last": "Kaelbling", "suffix": "" }, { "first": "", "middle": [], "last": "Dietterich", "suffix": "" } ], "year": 2005, "venue": "NIPS'05 Workshop, Inductive Transfer: 10 Years Later", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael T Rosenstein, Zvika Marx, Leslie Pack Kael- bling, and Thomas G Dietterich. 2005. To transfer or not to transfer. In In NIPS'05 Workshop, Induc- tive Transfer: 10 Years Later. 
Citeseer.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Neural Transfer Learning for Natural Language Processing", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Ruder. 2019. Neural Transfer Learning for Natural Language Processing. Ph.D. thesis, NA- TIONAL UNIVERSITY OF IRELAND, GALWAY.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Combating negative transfer from predictive distribution differences", "authors": [ { "first": "Chun-Wei", "middle": [], "last": "Seah", "suffix": "" }, { "first": "Yew-Soon", "middle": [], "last": "Ong", "suffix": "" }, { "first": "Ivor", "middle": [ "W" ], "last": "Tsang", "suffix": "" } ], "year": 2012, "venue": "IEEE transactions on cybernetics", "volume": "43", "issue": "4", "pages": "1153--1165", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chun-Wei Seah, Yew-Soon Ong, and Ivor W Tsang. 2012. Combating negative transfer from predictive distribution differences. IEEE transactions on cy- bernetics, 43(4):1153-1165.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Introduction to the conll-2000 shared task: chunking", "authors": [ { "first": "Erik F Tjong Kim", "middle": [], "last": "Sang", "suffix": "" }, { "first": "Sabine", "middle": [], "last": "Buchholz", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 2nd workshop on Learning language in logic and the 4th conference on Computational natural language learning", "volume": "7", "issue": "", "pages": "127--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erik F Tjong Kim Sang and Sabine Buchholz. 2000. In- troduction to the conll-2000 shared task: chunking. In Proceedings of the 2nd workshop on Learning language in logic and the 4th conference on Compu- tational natural language learning-Volume 7, pages 127-132.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Introduction to the conll-2003 shared task: languageindependent named entity recognition", "authors": [ { "first": "Erik F Tjong Kim", "middle": [], "last": "Sang", "suffix": "" }, { "first": "Fien", "middle": [], "last": "De Meulder", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003", "volume": "4", "issue": "", "pages": "142--147", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: language- independent named entity recognition. In Proceed- ings of the seventh conference on Natural language learning at HLT-NAACL 2003-Volume 4, pages 142- 147.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Transfer learning", "authors": [ { "first": "Lisa", "middle": [], "last": "Torrey", "suffix": "" }, { "first": "Jude", "middle": [], "last": "Shavlik", "suffix": "" } ], "year": 2010, "venue": "Handbook of research on machine learning applications and trends: algorithms, methods, and techniques", "volume": "", "issue": "", "pages": "242--264", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lisa Torrey and Jude Shavlik. 2010. Transfer learn- ing. In Handbook of research on machine learning applications and trends: algorithms, methods, and techniques, pages 242-264. 
IGI global.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Characterizing and avoiding negative transfer", "authors": [ { "first": "Zirui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Barnab\u00e1s", "middle": [], "last": "P\u00f3czos", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "11293--11302", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zirui Wang, Zihang Dai, Barnab\u00e1s P\u00f3czos, and Jaime Carbonell. 2019. Characterizing and avoiding nega- tive transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 11293-11302.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Taskonomy: Disentangling task transfer learning", "authors": [ { "first": "Alexander", "middle": [], "last": "Amir R Zamir", "suffix": "" }, { "first": "William", "middle": [], "last": "Sax", "suffix": "" }, { "first": "Leonidas", "middle": [ "J" ], "last": "Shen", "suffix": "" }, { "first": "Jitendra", "middle": [], "last": "Guibas", "suffix": "" }, { "first": "Silvio", "middle": [], "last": "Malik", "suffix": "" }, { "first": "", "middle": [], "last": "Savarese", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "3712--3722", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amir R Zamir, Alexander Sax, William Shen, Leonidas J Guibas, Jitendra Malik, and Silvio Savarese. 2018. Taskonomy: Disentangling task transfer learning. In Proceedings of the IEEE con- ference on computer vision and pattern recognition, pages 3712-3722.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Positive transfer curves (blue) and negative transfer curves (red) on Tweets data-sets, according to different pretraining epochs. Transparent Gray highlights the final gain brought by TL.", "type_str": "figure", "uris": null, "num": null }, "TABREF0": { "content": "
Task | Classes | Sources | Eval. Metrics | Splits (train - val - test)
POS: POS Tagging | 36 | WSJ | Top-1 Acc. | 912,344 - 131,768 - 129,654
CK: Chunking | 22 | CONLL-2000 | Top-1 Exact-match F1. | 211,727 - n/a - 47,377
NER: Named Entity Recognition | 4 | CONLL-2003 | Top-1 Exact-match F1. | 203,621 - 51,362 - 46,435
POS: POS Tagging | 17 | TweeBank | Top-1 Acc. | 24,753 - 11,742 - 19,112
CK: Chunking | 18 | TChunk | Top-1 Exact-match F1. | 10,652 - 2,242 - 2,291
NER: Named Entity Recognition | 6 | WNUT | Top-1
", "html": null, "text": "394", "num": null, "type_str": "table" }, "TABREF1": { "content": "
Method | TPoS (POS, Acc. %) | ArK (POS, Acc. %) | TweeBank (POS, Acc. %) | TChunk (CK, Acc. %) | WNUT (NER, F1 %)
Random Initialisation | 86.82 | 91.10 | 91.66 | 85.96 | 40.36
Transfer Learning | 89.57 | 92.09 | 93.23 | 88.86 | 41.92
", "html": null, "text": "Statistics of the datasets we used to train our models. Top: datasets of the source domain. Bottom: datasets of the target domain.", "num": null, "type_str": "table" }, "TABREF2": { "content": "", "html": null, "text": "Results on POS, CK and NER of Tweets using Transfer Learning vs Random initialisation.", "num": null, "type_str": "table" } } } }