{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:13:18.310966Z" }, "title": "How Far Can We Go with Data Selection? A Case Study on Semantic Sequence Tagging Tasks", "authors": [ { "first": "Samuel", "middle": [], "last": "Louvan", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Trento", "location": {} }, "email": "slouvan@fbk.eu" }, { "first": "Fondazione", "middle": [ "Bruno" ], "last": "Kessler", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Trento", "location": {} }, "email": "" }, { "first": "Bernardo", "middle": [ "Magnini" ], "last": "Fondazione", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Trento", "location": {} }, "email": "" }, { "first": "Bruno", "middle": [], "last": "Kessler", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Trento", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Although several works have addressed the role of data selection to improve transfer learning for various NLP tasks, there is no consensus about its real benefits and, more generally, there is a lack of shared practices on how it can be best applied. We propose a systematic approach aimed at evaluating data selection in scenarios of increasing complexity. Specifically, we compare the case in which source and target tasks are the same while source and target domains are different, against the more challenging scenario where both tasks and domains are different. We run a number of experiments on semantic sequence tagging tasks, which are relatively less investigated in data selection, and conclude that data selection has more benefit on the scenario when the tasks are the same, while in case of different (although related) tasks from distant domains, a combination of data selection and multi-task learning is ineffective for most cases.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Although several works have addressed the role of data selection to improve transfer learning for various NLP tasks, there is no consensus about its real benefits and, more generally, there is a lack of shared practices on how it can be best applied. We propose a systematic approach aimed at evaluating data selection in scenarios of increasing complexity. Specifically, we compare the case in which source and target tasks are the same while source and target domains are different, against the more challenging scenario where both tasks and domains are different. We run a number of experiments on semantic sequence tagging tasks, which are relatively less investigated in data selection, and conclude that data selection has more benefit on the scenario when the tasks are the same, while in case of different (although related) tasks from distant domains, a combination of data selection and multi-task learning is ineffective for most cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Transfer learning is a common approach for training NLP models that scale across different tasks, domains, and languages. One of the challenges in transfer learning is to deal with the data distribution mismatch between the source (D S ) and the target data (D T ) (Rosenstein et al., 2005) . 
One solution to alleviate the impact of the mismatch is data selection, a process for selecting relevant training instances from the source data. Data selection (DS) has been applied in the context of domain adaptation to address changes in the data distribution for various NLP tasks, such as sentiment analysis and POS tagging (Ruder and Plank, 2017; Liu et al., 2019; Blitzer et al., 2007; Remus, 2012), machine translation (Axelrod et al., 2011), dependency parsing (S\u00f8gaard, 2011) and Named Entity Recognition (NER) (Murthy et al., 2018; Zhao et al., 2018). To our knowledge, all existing previous work applies data selection across different domains, while maintaining the same task.", "cite_spans": [ { "start": 265, "end": 290, "text": "(Rosenstein et al., 2005)", "ref_id": "BIBREF16" }, { "start": 460, "end": 464, "text": "(DS)", "ref_id": null }, { "start": 628, "end": 651, "text": "(Ruder and Plank, 2017;", "ref_id": "BIBREF17" }, { "start": 652, "end": 669, "text": "Liu et al., 2019;", "ref_id": "BIBREF8" }, { "start": 670, "end": 691, "text": "Blitzer et al., 2007;", "ref_id": "BIBREF1" }, { "start": 692, "end": 704, "text": "Remus, 2012)", "ref_id": "BIBREF15" }, { "start": 727, "end": 749, "text": "(Axelrod et al., 2011)", "ref_id": "BIBREF0" }, { "start": 771, "end": 786, "text": "(S\u00f8gaard, 2011)", "ref_id": "BIBREF20" }, { "start": 822, "end": 843, "text": "(Murthy et al., 2018;", "ref_id": "BIBREF10" }, { "start": 844, "end": 862, "text": "Zhao et al., 2018)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work we aim to investigate the benefit of data selection in a more complex setting, where we have not only different domains (D_S \u2260 D_T), but also different tasks (T_S \u2260 T_T). Intuitively, such a setting may bring advantages in situations where large training data are available for a source task T_S, and we want to exploit such data for a different (although related) target task T_T, for which much less training data is available. We experiment with the situation where T_S is Named Entity Recognition (NER) on a general domain, where several datasets are available, and T_T is slot tagging (ST) in the context of utterance interpretation for dialogue systems, where much less data is available. Both tasks are rarely investigated in data selection and there is no consensus about the benefit of data selection for them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We propose an experimental framework where we can compare data selection settings with an increasing level of complexity. First, we consider data selection where NER is both the source and the target task, and apply transfer learning from different domains: we call this setting Same Tasks from Different Domains (STDD), with T_S = T_T and D_S \u2260 D_T. In a second, more complex setting, we consider NER as the source task and ST as the target, with T_S \u2260 T_T and D_S \u2260 D_T: we call this setting Different Tasks from Different Domains (DTDD). In this scenario, as the label spaces of the source and the target task are disjoint, we combine the data selection process with multi-task learning (MTL). 
To our knowledge, this combination has received very little attention in the literature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We base our work on the data selection framework proposed by Ruder and Plank (2017) , and apply it to our experimental settings. Their framework is model-agnostic and has shown significant advantages in sentiment analysis, POS tagging, and parsing. However, it is not obvious to what extent the selection process can actually help on semantic sequence tagging tasks in the STDD and DTDD scenarios. The contributions of the paper are the following: (i) we extend previous work to a multi-task learning setup to evaluate the effectiveness of data selection in DTDD scenarios; (ii) we systematically compare data selection on settings of increasing complexity, and observe that existing selection metrics do not show clear advantages over baselines in most cases. Nevertheless, data selection has more potential in STDD when source and target are more similar, while combining MTL and data selection for DTDD is ineffective in most cases in our experimental settings, in which we have different but related tasks (NER and ST) from relatively distant domains (news and conversational domains).", "cite_spans": [ { "start": 61, "end": 83, "text": "Ruder and Plank (2017)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In general, the goal of data selection is to select an optimal subset of training instances, X*_S, from all the available data X_S of T_S, to be used for training the model for the target task, M_{T_T}. Given the source data", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Selection Framework", "sec_num": "2" }, { "text": "X_S = {x^S_1, x^S_2, ..., x^S_n}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Selection Framework", "sec_num": "2" }, { "text": ", each instance is ranked according to a score S and the top m examples are then used to train M_{T_T}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Selection Framework", "sec_num": "2" }, { "text": "We apply the data selection approach from Ruder and Plank (2017), based on Bayesian Optimization (BO) (Brochu et al., 2010) , to evaluate the effectiveness of data selection on both the STDD and DTDD scenarios. Specifically, for DTDD we combine data selection and multi-task learning. Given X_S, the framework performs data selection based on a score S derived from a set of features. The top m examples are then used to train M_{T_T}. In the case of STDD, M_{T_T} is a single-task sequence tagging model, for which we use a biLSTM-CRF model (Lample et al., 2016) . As for DTDD, M_{T_T} is a hard parameter sharing MTL model, an architecture that has been applied to many NLP tasks (S\u00f8gaard and Goldberg, 2016; Plank et al., 2016; Changpinyo et al., 2018; Schulz et al., 2018) . 
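As an illustration of this selection step, the following simplified sketch (illustrative only; our actual implementation follows the richer feature set of Ruder and Plank (2017)) shows how source instances can be represented by term distributions, scored with a weighted combination of one similarity and one diversity feature, and filtered to the top m:

import numpy as np
from collections import Counter

def term_distribution(tokens, vocab):
    # Normalized term counts of one instance over a fixed vocabulary (a list).
    counts = Counter(t for t in tokens if t in vocab)
    dist = np.array([counts[w] for w in vocab], dtype=float)
    total = dist.sum()
    return dist / total if total > 0 else np.full(len(vocab), 1.0 / len(vocab))

def js_similarity(p, q, eps=1e-12):
    # 1 - Jensen-Shannon divergence, used here as one similarity feature.
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log((a + eps) / (b + eps))))
    return 1.0 - 0.5 * (kl(p, m) + kl(q, m))

def phi(tokens, target_dist, vocab):
    # Feature vector phi(x): one similarity and one diversity feature.
    similarity = js_similarity(term_distribution(tokens, vocab), target_dist)
    diversity = len(set(tokens)) / len(tokens)  # type-token ratio as a proxy
    return np.array([similarity, diversity])

def select_top_m(source, target_dist, vocab, theta, m):
    # Score each tokenized source instance with theta . phi(x) and keep
    # the m highest-scoring instances for training the target model.
    scores = np.array([theta @ phi(x, target_dist, vocab) for x in source])
    return [source[i] for i in np.argsort(-scores)[:m]]
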
The performance on the validation set of the target task is then used by the BO optimizer to update the weights of the scoring features.", "cite_spans": [ { "start": 102, "end": 123, "text": "(Brochu et al., 2010)", "ref_id": "BIBREF2" }, { "start": 536, "end": 557, "text": "(Lample et al., 2016)", "ref_id": "BIBREF5" }, { "start": 659, "end": 678, "text": "Plank et al., 2016;", "ref_id": "BIBREF11" }, { "start": 679, "end": 703, "text": "Changpinyo et al., 2018;", "ref_id": "BIBREF3" }, { "start": 704, "end": 724, "text": "Schulz et al., 2018)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Data Selection Framework", "sec_num": "2" }, { "text": "Following Ruder and Plank (2017) , the selection process is based on a score S computed as the linear combination of weighted features, which include both similarity and diversity features:", "cite_spans": [ { "start": 10, "end": 32, "text": "Ruder and Plank (2017)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Data Selection Framework", "sec_num": "2" }, { "text": "S_\u03b8(x) = \u03b8 \u2022 \u03c6(x)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Selection Framework", "sec_num": "2" }, { "text": ", where \u03b8 represents the weight of each feature and \u03c6(x) denotes the feature values of each instance x. The features are calculated between the representation of the X_S instances and X_T. We use term distributions as the representation of the instances, and the same similarity and diversity measures as Ruder and Plank (2017) . The weights \u03b8 are learned through BO by taking into account the performance on the validation set when selecting a particular subset of X_S. The score S is computed for each x in X_S, and the top m examples are then selected for training the M_{T_T} model. The loss value L of M_{T_T} on the validation set is used by BO as feedback to select the next points for \u03b8.", "cite_spans": [ { "start": 305, "end": 327, "text": "Ruder and Plank (2017)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Data Selection Framework", "sec_num": "2" }, { "text": "We systematically investigate how effective data selection is when applied to both the STDD and DTDD scenarios. We address two semantic sequence labeling tasks: Named Entity Recognition (NER) and slot tagging (ST).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "For NER we use the OntoNotes 5.0 (Pradhan et al., 2012) dataset, which consists of several sections: newswire (NW), broadcast conversation (BC), telephone conversation (TC), broadcast news (BN), articles from web sources (WB), and articles from magazines (MZ). We use different OntoNotes sections as different domains in our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "As for ST, we use three datasets: ATIS (Price, 1990), MIT-R, and MIT-M (Liu et al., 2013) , which are widely used as benchmarks for spoken language understanding. Each dataset contains utterances annotated with domain-specific slot labels, which are typically more fine-grained than NER labels. For example, in the utterance \"show me all Delta flights from Milan to New York\", the words \"Delta\", \"Milan\", and \"New York\" are tagged as airline name, fromloc, and toloc, respectively. 
The overall statistics of each dataset are shown in Table 1.", "cite_spans": [ { "start": 70, "end": 88, "text": "(Liu et al., 2013)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 503, "end": 510, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "We make use of the selection framework described in Section 2, and apply three Bayesian Optimization data selection (BODS) configurations, according to whether we use features both for similarity and diversity (DS_{sim,div}), similarity features only (DS_{sim}), or diversity features only (DS_{div}). We compare the three configurations with the following baselines:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Selection Configurations", "sec_num": "3.2" }, { "text": "\u2022 All source, which uses all the data from T_S.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Selection Configurations", "sec_num": "3.2" }, { "text": "\u2022 Random, which selects random data from T_S. \u2022 DS_{map,full}. We provide a manual mapping from NER labels to ST labels (Appendix A). A sentence from T_S is selected if all the NER occurrences have a mapping to a slot in T_T. \u2022 DS_{map,partial}. A sentence from T_S is selected if at least one of the NER occurrences in the sentence has a mapping to a slot label in T_T.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Selection Configurations", "sec_num": "3.2" }, { "text": "We follow most of the hyperparameters 1 recommended by Reimers and Gurevych (2018) . We train the models for T_S and T_T in an alternating fashion. We use early stopping on the dev performance of T_T. For model performance evaluation, we calculate the F1-score using the standard CoNLL script 2 . For all experiments, we report the average F1-score over 10 runs with different seeds. We follow Ruder and Plank (2017) for most configurations of the optimizer, and run 50 iterations. For both the STDD and DTDD scenarios, we select the top 50% 3 of examples from X_S. For MTL we adapt the implementation from Reimers and Gurevych (2017) , extending the Bayesian Optimization data selection framework from Ruder and Plank (2017) to support MTL.", "cite_spans": [ { "start": 58, "end": 85, "text": "Reimers and Gurevych (2018)", "ref_id": "BIBREF14" }, { "start": 617, "end": 644, "text": "Reimers and Gurevych (2017)", "ref_id": "BIBREF14" }, { "start": 713, "end": 735, "text": "Ruder and Plank (2017)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Settings", "sec_num": "3.3" }, { "text": "T_S = T_T, D_S \u2260 D_T", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "STDD Scenario:", "sec_num": "4" }, { "text": "This scenario is the same setup as in Ruder and Plank (2017) , where source and target share the same task but come from different domains, except that we apply data selection to a semantic sequence tagging task, namely NER. We use NER both as the source and the target task. The target domain is one of three OntoNotes sections, namely NW (news), TC (telephone conversation), and BC (broadcast conversation), while as source domain (D_S) we use all available sections in OntoNotes except the one used as the target domain. We only use 10% of the training data for the target domain to simulate limited data settings. 
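To make the optimization loop concrete, the sketch below shows one way to implement it under these settings (feature weights bounded in [-1, 1] and 50 iterations, as in Appendix C). It is illustrative only: scikit-optimize's gp_minimize stands in for the optimizer we actually run, select_top_m refers to the sketch in Section 2, and train_and_eval is a hypothetical helper that trains M_{T_T} on the selected subset together with the limited target training data and returns the validation loss:

import numpy as np
from skopt import gp_minimize  # stand-in GP-based Bayesian optimizer

N_FEATURES = 2               # e.g., one similarity and one diversity feature
M = len(source_data) // 2    # top 50% of the source instances (assumed loaded)

def objective(theta):
    # One BO iteration: select with the proposed weights, train, return dev loss.
    subset = select_top_m(source_data, target_dist, vocab, np.array(theta), M)
    return train_and_eval(subset, target_train, target_dev)  # hypothetical helper

result = gp_minimize(objective,
                     dimensions=[(-1.0, 1.0)] * N_FEATURES,  # search space for theta
                     n_calls=50)                             # 50 BO iterations
best_theta = np.array(result.x)
final_subset = select_top_m(source_data, target_dist, vocab, best_theta, M)
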
At the end of the data selection process, we select the top 50% of sentences from D_S using the best feature weights learned with the Bayesian Optimizer. Table 2 (a) compares the performance of the baselines with the selection-based approaches. In general, we do not observe clear advantages of the data selection methods over the baselines, especially the all source data baseline. Using all source data yields the most competitive results in almost all cases. The only case in which DS surpasses the all source baseline is the BC domain, and only by a tiny margin. For the NW and BC domains, some DS methods show clear advantages over the random baseline, but are still worse than using all source data.", "cite_spans": [ { "start": 38, "end": 60, "text": "Ruder and Plank (2017)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 810, "end": 817, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "STDD Scenario:", "sec_num": "4" }, { "text": "We want to see whether the distance between domains characterizes the performance of data selection. For this purpose we quantify the domain similarity between each pair D_S and D_T with the Jensen-Shannon Divergence (JSD) (Lin, 1991) , computed between the term distributions of D_S and D_T. The average JSD of each target domain with respect to the source domains is 0.80 (TC), 0.86 (NW), and 0.87 (BC) 4 . We observe that the higher the JSD is, the more beneficial data selection is for the target domain. BC, which has the highest average JSD, benefits the most from data selection. On the other hand, TC, which has the lowest average JSD, has the largest gap between the baseline and the best DS method (\u22121.7 F1 points).", "cite_spans": [ { "start": 226, "end": 237, "text": "(Lin, 1991)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "STDD Scenario:", "sec_num": "4" }, { "text": "Based on our experiments, for the STDD scenario we observe that: 1. In most cases, DS methods are inferior to the all source baseline. Yet, it is clear that each domain has a different selection metric configuration that performs best. This observation suggests that the hypothesis from Ruder and Plank (2017) , i.e., that different tasks or even different domains demand a different notion of selection metric, is also applicable to semantic sequence tagging tasks such as NER. 2. The gap between the best DS method and the baseline for each D_T can be characterized by the average JSD with respect to its D_S: being more similar to the other D_S is a more suitable situation to get benefit from data selection.", "cite_spans": [ { "start": 298, "end": 320, "text": "Ruder and Plank (2017)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "STDD Scenario:", "sec_num": "4" }, { "text": "5 DTDD Scenario:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DTDD Scenario:", "sec_num": "5" }, { "text": "T_S \u2260 T_T, D_S \u2260 D_T", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DTDD Scenario:", "sec_num": "5" }, { "text": "In this scenario we intend to observe whether data selection adds benefit to MTL. As in the STDD case, data selection is performed on the auxiliary (source) task, where data is assumed to be abundant, and we only use a small portion of data for the target task. We use NER as the auxiliary task and ST as the target task. Prior work from Louvan and Magnini (2019) shows that NER is helpful for ST through MTL, although it is not clear whether adding data selection is beneficial. We follow the setup of Louvan and Magnini (2019), where OntoNotes NW is used as the auxiliary task, and the target task is one of the ST datasets with only 10% of the available training data. 
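To make the model side of this setting concrete, the following is a minimal hard parameter sharing sketch (illustrative PyTorch code, omitting the CRF layer of the full biLSTM-CRF model): a shared embedding layer and biLSTM encoder feed one tagging head per task, and training alternates batches between the NER (auxiliary) and ST (target) heads, as described in Section 3.3.

import torch.nn as nn

class SharedTagger(nn.Module):
    # One shared encoder with one classification head per task.
    def __init__(self, vocab_size, emb_dim, hidden_dim, n_ner_tags, n_slot_tags):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim,
                               bidirectional=True, batch_first=True)
        self.heads = nn.ModuleDict({
            "ner": nn.Linear(2 * hidden_dim, n_ner_tags),    # auxiliary task
            "slot": nn.Linear(2 * hidden_dim, n_slot_tags),  # target task
        })

    def forward(self, token_ids, task):
        # token_ids: (batch, seq_len); returns per-token tag logits for `task`.
        hidden, _ = self.encoder(self.embed(token_ids))
        return self.heads[task](hidden)
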
Observing the results in Table 2 (b), in all cases the baselines, namely all source data and random selection, perform better than MTL with DS methods. The selection methods based on the manual label mapping, DS_{map}, do not bring any advantage over all source data. Therefore, given two distant D_S and D_T, selecting sentences based on the label mapping does not help. Moreover, as random selection also gives good results in most scenarios, this indicates that data selection is not beneficial in our experimental setting combining data selection and MTL.", "cite_spans": [ { "start": 329, "end": 354, "text": "Louvan and Magnini (2019)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 684, "end": 691, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "DTDD Scenario:", "sec_num": "5" }, { "text": "Our findings and lessons learned for DTDD are the following: 1. We observe that MTL performs better than single-task learning (STL) for low-resource slot tagging, confirming the finding of Louvan and Magnini (2019) . However, adding data selection to MTL is ineffective in our DTDD experimental setup. We hypothesize that MTL learns good common feature representations across tasks, thereby inherently helping the model to focus on relevant features even from noisy data in T_S. In addition, due to data sparsity in the limited training data, using all the source training data works better because the model may learn a better text representation (sentence encoder). Recent related work from Schr\u00f6der and Biemann (2020) , which uses an information-theoretic approach to estimate the usefulness of an auxiliary task for MTL, also found that for semantic sequence tagging tasks such as NER and argument mining it is less clear when a particular dataset is useful as an auxiliary task. 2. Data selection typically produces selected sentences with a concentrated similarity distribution 5 . Therefore, it is probably ineffective when the sentence similarity distribution between T_S and T_T is already concentrated in a very narrow range.", "cite_spans": [ { "start": 191, "end": 216, "text": "Louvan and Magnini (2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "DTDD Scenario:", "sec_num": "5" }, { "text": "In this paper we investigated the benefit of data selection for transfer learning in several scenarios of increasing complexity. We applied an existing model-agnostic, state-of-the-art data selection framework, and carried out experiments on two semantic sequence tagging tasks, NER and Slot Tagging, and two transfer learning scenarios, STDD (Same Tasks Different Domains) and DTDD (Different Tasks Different Domains). For the STDD scenario, selection methods show potential when the target domain has the highest similarity to the source domains, based on the Jensen-Shannon Divergence. As for the DTDD scenario, in which we use related tasks (NER and ST) from distant domains (news and conversational domains), using selection does not bring any advantage over using all the source data. A possible cause is that, because of data sparsity on the target task, it is only by injecting more source data that we can improve the model. Finally, MTL does not benefit from data selection, as MTL may already effectively help the model to focus on relevant features even in the presence of noisy data from distant domains. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Appendix C reports all used hyperparameters. 
https://www.clips.uantwerpen.be/conll2000.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We tune the selection proportion from 10% to 50% on the dev set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Complete pairwise JSD values are listed in Appendix B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We embed the sentences in the source and the target with InferSent (Conneau et al., 2017) and compute the cosine similarity between the centroid of the target and each of the sentences in the source.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Domain adaptation via pseudo in-domain data selection", "authors": [ { "first": "Amittai", "middle": [], "last": "Axelrod", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "355--362", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amittai Axelrod, Xiaodong He, and Jianfeng Gao. 2011. Domain adaptation via pseudo in-domain data selection. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 355-362, Edinburgh, Scotland, UK. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Biographies, Bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification", "authors": [ { "first": "John", "middle": [], "last": "Blitzer", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "440--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, Bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the 45th Annual Meeting of the As- sociation of Computational Linguistics, pages 440- 447, Prague, Czech Republic. Association for Com- putational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A tutorial on bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning", "authors": [ { "first": "Eric", "middle": [], "last": "Brochu", "suffix": "" }, { "first": "Vlad", "middle": [ "M" ], "last": "Cora", "suffix": "" }, { "first": "Nando", "middle": [], "last": "De Freitas", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1012.2599" ] }, "num": null, "urls": [], "raw_text": "Eric Brochu, Vlad M Cora, and Nando De Freitas. 2010. A tutorial on bayesian optimization of ex- pensive cost functions, with application to active user modeling and hierarchical reinforcement learn- ing. 
arXiv preprint arXiv:1012.2599.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Multi-task learning for sequence tagging: An empirical study", "authors": [ { "first": "Soravit", "middle": [], "last": "Changpinyo", "suffix": "" }, { "first": "Hexiang", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Sha", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "2965--2977", "other_ids": {}, "num": null, "urls": [], "raw_text": "Soravit Changpinyo, Hexiang Hu, and Fei Sha. 2018. Multi-task learning for sequence tagging: An em- pirical study. In Proceedings of the 27th Inter- national Conference on Computational Linguistics, pages 2965-2977, Santa Fe, New Mexico, USA. As- sociation for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Supervised learning of universal sentence representations from natural language inference data", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "670--680", "other_ids": { "DOI": [ "10.18653/v1/D17-1070" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 670-680, Copen- hagen, Denmark. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Neural architectures for named entity recognition", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Sandeep", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Kazuya", "middle": [], "last": "Kawakami", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "260--270", "other_ids": { "DOI": [ "10.18653/v1/N16-1030" ] }, "num": null, "urls": [], "raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 260-270. Association for Computational Lin- guistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Divergence measures based on the shannon entropy", "authors": [ { "first": "Jianhua", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1991, "venue": "IEEE Trans. Information Theory", "volume": "37", "issue": "", "pages": "145--151", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jianhua Lin. 1991. 
Divergence measures based on the shannon entropy. IEEE Trans. Information Theory, 37:145-151.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Asgard: A Portable Architecture for Multilingual Dialogue Systems", "authors": [ { "first": "Jingjing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Panupong", "middle": [], "last": "Pasupat", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Cyphers", "suffix": "" }, { "first": "Jim", "middle": [], "last": "Glass", "suffix": "" } ], "year": 2013, "venue": "Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on", "volume": "", "issue": "", "pages": "8386--8390", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingjing Liu, Panupong Pasupat, Scott Cyphers, and Jim Glass. 2013. Asgard: A Portable Architec- ture for Multilingual Dialogue Systems. In Acous- tics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 8386- 8390. IEEE.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Reinforced training data selection for domain adaptation", "authors": [ { "first": "Miaofeng", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Song", "suffix": "" }, { "first": "Hongbin", "middle": [], "last": "Zou", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1957--1968", "other_ids": { "DOI": [ "10.18653/v1/P19-1189" ] }, "num": null, "urls": [], "raw_text": "Miaofeng Liu, Yan Song, Hongbin Zou, and Tong Zhang. 2019. Reinforced training data selection for domain adaptation. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 1957-1968, Florence, Italy. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Leveraging non-conversational tasks for low resource slot filling: Does it help?", "authors": [ { "first": "Samuel", "middle": [], "last": "Louvan", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Magnini", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue", "volume": "", "issue": "", "pages": "85--91", "other_ids": { "DOI": [ "10.18653/v1/W19-5911" ] }, "num": null, "urls": [], "raw_text": "Samuel Louvan and Bernardo Magnini. 2019. Lever- aging non-conversational tasks for low resource slot filling: Does it help? In Proceedings of the 20th An- nual SIGdial Meeting on Discourse and Dialogue, pages 85-91, Stockholm, Sweden. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Judicious selection of training data in assisting language for multilingual neural NER", "authors": [ { "first": "Rudra", "middle": [], "last": "Murthy", "suffix": "" }, { "first": "Anoop", "middle": [], "last": "Kunchukuttan", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "401--406", "other_ids": { "DOI": [ "10.18653/v1/P18-2064" ] }, "num": null, "urls": [], "raw_text": "Rudra Murthy, Anoop Kunchukuttan, and Pushpak Bhattacharyya. 2018. Judicious selection of train- ing data in assisting language for multilingual neural NER. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 2: Short Papers), pages 401-406, Melbourne, Australia. Association for Computational Linguis- tics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss", "authors": [ { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "412--418", "other_ids": { "DOI": [ "10.18653/v1/P16-2067" ] }, "num": null, "urls": [], "raw_text": "Barbara Plank, Anders S\u00f8gaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidi- rectional long short-term memory models and auxil- iary loss. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (Volume 2: Short Papers), pages 412-418, Berlin, Germany. Association for Computational Linguis- tics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes", "authors": [ { "first": "Alessandro", "middle": [], "last": "Sameer Pradhan", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Yuchen", "middle": [], "last": "Uryupina", "suffix": "" }, { "first": "", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2012, "venue": "Joint Conference on EMNLP and CoNLL -Shared Task", "volume": "", "issue": "", "pages": "1--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL- 2012 shared task: Modeling multilingual unre- stricted coreference in OntoNotes. In Joint Confer- ence on EMNLP and CoNLL -Shared Task, pages 1-40, Jeju Island, Korea. Association for Computa- tional Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Evaluation of Spoken Language Systems: The ATIS Domain", "authors": [ { "first": "J", "middle": [], "last": "Patti", "suffix": "" }, { "first": "", "middle": [], "last": "Price", "suffix": "" } ], "year": 1990, "venue": "Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patti J Price. 1990. Evaluation of Spoken Language Systems: The ATIS Domain. In Speech and Natu- ral Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Why Comparing Single Performance Scores Does Not Allow to Draw Conclusions About Machine Learning Approaches", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "338--348", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2017. 
Reporting Score Distributions Makes a Difference: Perfor- mance Study of LSTM-networks for Sequence Tag- ging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 338-348, Copenhagen, Denmark. Nils Reimers and Iryna Gurevych. 2018. Why Com- paring Single Performance Scores Does Not Allow to Draw Conclusions About Machine Learning Ap- proaches. CoRR, abs/1803.09578.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Domain adaptation using domain similarity-and domain complexity-based instance selection for cross-domain sentiment analysis", "authors": [ { "first": "Robert", "middle": [], "last": "Remus", "suffix": "" } ], "year": 2012, "venue": "2012 IEEE 12th international conference on data mining workshops", "volume": "", "issue": "", "pages": "717--723", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Remus. 2012. Domain adaptation using domain similarity-and domain complexity-based instance se- lection for cross-domain sentiment analysis. In 2012 IEEE 12th international conference on data mining workshops, pages 717-723. IEEE.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "To transfer or not to transfer", "authors": [ { "first": "Michael", "middle": [ "T" ], "last": "Rosenstein", "suffix": "" }, { "first": "Zvika", "middle": [], "last": "Marx", "suffix": "" }, { "first": "Leslie", "middle": [ "Pack" ], "last": "Kaelbling", "suffix": "" }, { "first": "Thomas", "middle": [ "G" ], "last": "Dietterich", "suffix": "" } ], "year": 2005, "venue": "NIPS'05 Workshop, Inductive Transfer: 10 Years Later", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael T. Rosenstein, Zvika Marx, Leslie Pack Kael- bling, and Thomas G. Dietterich. 2005. To transfer or not to transfer. In In NIPS'05 Workshop, Induc- tive Transfer: 10 Years Later.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Learning to select data for transfer learning with Bayesian optimization", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "372--382", "other_ids": { "DOI": [ "10.18653/v1/D17-1038" ] }, "num": null, "urls": [], "raw_text": "Sebastian Ruder and Barbara Plank. 2017. Learning to select data for transfer learning with Bayesian opti- mization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 372-382, Copenhagen, Denmark. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Estimating the influence of auxiliary tasks for multi-task learning of sequence tagging tasks", "authors": [ { "first": "Fynn", "middle": [], "last": "Schr\u00f6der", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2971--2985", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.268" ] }, "num": null, "urls": [], "raw_text": "Fynn Schr\u00f6der and Chris Biemann. 2020. Estimating the influence of auxiliary tasks for multi-task learn- ing of sequence tagging tasks. 
In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 2971-2985, Online. As- sociation for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Multi-task learning for argumentation mining in low-resource settings", "authors": [ { "first": "Claudia", "middle": [], "last": "Schulz", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Eger", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Daxenberger", "suffix": "" }, { "first": "Tobias", "middle": [], "last": "Kahse", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "35--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Claudia Schulz, Steffen Eger, Johannes Daxenberger, Tobias Kahse, and Iryna Gurevych. 2018. Multi-task learning for argumentation mining in low-resource settings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 2 (Short Papers), pages 35-41.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Data point selection for crosslanguage adaptation of dependency parsers", "authors": [ { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "682--686", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anders S\u00f8gaard. 2011. Data point selection for cross- language adaptation of dependency parsers. In Pro- ceedings of the 49th Annual Meeting of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, pages 682-686, Portland, Ore- gon, USA. Association for Computational Linguis- tics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Deep multitask learning with low level tasks supervised at lower layers", "authors": [ { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "231--235", "other_ids": { "DOI": [ "10.18653/v1/P16-2038" ] }, "num": null, "urls": [], "raw_text": "Anders S\u00f8gaard and Yoav Goldberg. 2016. Deep multi- task learning with low level tasks supervised at lower layers. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (Volume 2: Short Papers), pages 231-235, Berlin, Germany. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Improve neural entity recognition via multi-task data selection and constrained decoding", "authors": [ { "first": "Huasha", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Qiong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Luo", "middle": [], "last": "Si", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "346--351", "other_ids": { "DOI": [ "10.18653/v1/N18-2056" ] }, "num": null, "urls": [], "raw_text": "Huasha Zhao, Yi Yang, Qiong Zhang, and Luo Si. 2018. Improve neural entity recognition via multi-task data selection and constrained decoding. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Pa- pers), pages 346-351, New Orleans, Louisiana. As- sociation for Computational Linguistics.", "links": null } }, "ref_entries": { "TABREF1": { "content": "", "html": null, "text": "Statistics about the datasets used in the experiments. The language of the datasets is English.", "type_str": "table", "num": null }, "TABREF2": { "content": "
(a) STDD
Method            TC            NW            BC
Baseline
All source        63.17 (4.75)  79.08\u2020 (0.42)  73.42 (2.13)
Random            62.02 (4.47)  77.93 (0.54)  71.39 (2.12)
BODS
DS_{sim,div}      61.71 (4.57)  76.99 (0.40)  72.60 (1.14)
DS_{sim}          61.45 (3.80)  78.30 (0.41)  73.44 (1.12)
DS_{div}          61.65 (3.77)  78.32 (0.53)  71.89 (1.53)

(b) DTDD
Method            ATIS          MIT-R         MIT-M
biLSTM-CRF (STL)  85.46 (0.25)  63.99 (0.77)  76.39 (0.57)
Baseline (MTL)
All source        90.05 (0.34)  69.28 (0.40)  81.28 (0.23)
Random            89.93 (0.26)  69.54 (0.35)  81.35 (0.31)
DS_{map,full}     89.97 (0.25)  68.82 (0.50)  79.27 (0.36)
DS_{map,partial}  89.85 (0.29)  69.24 (0.40)  80.76 (0.30)
MTL+BODS
DS_{sim,div}      89.78 (0.39)  69.29 (0.37)  81.07 (0.29)
DS_{sim}          89.83 (0.31)  69.25 (0.41)  81.17 (0.25)
DS_{div}          89.95 (0.41)  69.09 (0.24)  81.10 (0.28)
", "html": null, "text": "The gap between the best DS method and the baseline for each D T can be characterized from the average JSD similarity to its D S . Being", "type_str": "table", "num": null }, "TABREF3": { "content": "", "html": null, "text": "Average F1-score and standard deviation on the test set. \u2020 indicates significant differences (p < 0.05) between the best BODS approach and the best baseline.", "type_str": "table", "num": null }, "TABREF5": { "content": "
MIT Movie Slot                 OntoNotes Label
CHARACTER, ACTOR, DIRECTOR     PER
YEAR                           DATE
PLOT, RATING, TITLE, REVIEW,   O
SONG, RATINGS AVERAGE, GENRE,
TRAILER
", "html": null, "text": "Label Mapping from MIT Movie to OntoNotes.", "type_str": "table", "num": null }, "TABREF6": { "content": "
", "html": null, "text": "Label Mapping from ATIS to OntoNotes.", "type_str": "table", "num": null }, "TABREF7": { "content": "
C Hyperparameters
Hyperparameter                 Value
LSTM cell size                 100
Dropout                        0.5
Word embedding dimension       300
Character embedding dimension  100
Mini-batch size                128
Clip norm                      1
Optimizer                      Adam
Number of epochs               20
Early stopping                 10
", "html": null, "text": "Domain Similarity (JSD) for each D T and D S", "type_str": "table", "num": null }, "TABREF8": { "content": "
Parameter                            Adopted value
Surrogate model                      Gaussian Processes with MCMC sampling
Acquisition function                 Expected Logarithmic Improvement
Number of initial evaluation points  3
Search space upper bound             1
Search space lower bound             -1
Number of iterations                 50
", "html": null, "text": "Neural model hyperparameters", "type_str": "table", "num": null }, "TABREF9": { "content": "", "html": null, "text": "Parameters used by the Bayesian Optimizer.", "type_str": "table", "num": null } } } }