{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:12:30.732114Z" }, "title": "Uncertainty and Traffic-Aware Active Learning for Semantic Parsing", "authors": [ { "first": "Priyanka", "middle": [], "last": "Sen", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Collecting training data for semantic parsing is a time-consuming and expensive task. As a result, there is growing interest in industry to reduce the number of annotations required to train a semantic parser, both to cut down on costs and to limit customer data handled by annotators. In this paper, we propose uncertainty and traffic-aware active learning, a novel active learning method that uses model confidence and utterance frequencies from customer traffic to select utterances for annotation. We show that our method significantly outperforms baselines on an internal customer dataset and the Facebook Task Oriented Parsing (TOP) dataset. On our internal dataset, our method achieves the same accuracy as random sampling with 2,000 fewer annotations.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Collecting training data for semantic parsing is a time-consuming and expensive task. As a result, there is growing interest in industry to reduce the number of annotations required to train a semantic parser, both to cut down on costs and to limit customer data handled by annotators. In this paper, we propose uncertainty and traffic-aware active learning, a novel active learning method that uses model confidence and utterance frequencies from customer traffic to select utterances for annotation. We show that our method significantly outperforms baselines on an internal customer dataset and the Facebook Task Oriented Parsing (TOP) dataset. 
On our internal dataset, our method achieves the same accuracy as random sampling with 2,000 fewer annotations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Semantic parsing is the task of mapping natural language to a machine-executable meaning representation. Supervised semantic parsing models are trained on corpora of natural language utterances with annotated meaning representations. Collecting these annotations is an expensive manual process, usually requiring expert annotators who are familiar with both the domain of utterances and the target meaning representation language (e.g. SQL).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Active learning is a method for collecting training data when annotating is difficult or budgets are limited (Settles, 2009). In active learning, an algorithm selects examples from an unlabeled set that are predicted to be more useful for the model if labeled. These examples are annotated and the model is retrained in an iterative process. The goal of an active learner is to reach higher performance faster than a random sampling baseline.", "cite_spans": [ { "start": 109, "end": 124, "text": "(Settles, 2009)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose uncertainty and traffic-aware active learning, a simple yet effective method to improve a semantic parser. In our setup, we assume access to a set of initially annotated utterances and a large set of unlabeled utterances from customer traffic. 
We show that by using a combination of uncertainty and utterance frequency from traffic, we can achieve significantly higher performance than baselines on both an internal customer dataset and the Facebook Task Oriented Parsing (TOP) dataset (Gupta et al., 2018).", "cite_spans": [ { "start": 514, "end": 534, "text": "(Gupta et al., 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Active learning has been applied to various NLP tasks (Zhou et al., 2010; Li et al., 2012; Shen et al., 2017; Peshterliev et al., 2019). Duong et al. (2018) presented one of the first works on active learning for deep semantic parsing and found that selecting low-confidence examples outperformed random examples on two datasets but failed on a third. Koshorek et al. (2019) experimented with learning to actively learn for semantic parsing, a method where the active learner is itself a learned model, but failed to see better performance than random sampling. Ni et al. (2020) proposed a framework where a weakly trained semantic parser was allowed to actively select examples for extra supervision. The authors found that selecting the least confident of the incorrect examples led to the best performance. Incorrect examples were identified by executing the predicted query and comparing the predicted answer with an expected answer. In this paper, we experiment with using uncertainty and utterance frequencies from customer traffic, a feature often found in industry logs.", "cite_spans": [ { "start": 54, "end": 73, "text": "(Zhou et al., 2010;", "ref_id": "BIBREF18" }, { "start": 74, "end": 90, "text": "Li et al., 2012;", "ref_id": "BIBREF8" }, { "start": 91, "end": 109, "text": "Shen et al., 2017;", "ref_id": "BIBREF16" }, { "start": 110, "end": 135, "text": "Peshterliev et al., 2019)", "ref_id": "BIBREF12" }, { "start": 138, "end": 157, "text": "Duong et al. 
(2018)", "ref_id": "BIBREF3" }, { "start": 353, "end": 375, "text": "Koshorek et al. (2019)", "ref_id": "BIBREF6" }, { "start": 556, "end": 572, "text": "Ni et al. (2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We propose uncertainty and traffic-aware active learning for semantic parsing. Our method is inspired by Mehrotra and Yilmaz (2015), who presented an active learning method for ranking algorithms that selects examples that are both informative to the model and representative of the dataset. The authors found that including a representativeness measure helped offset the tendency of informativeness measures to select outliers. In their paper, the authors measured informativeness as permutation probability based on a committee of ranking models, so a query where the most certain committee member had the least confidence was considered more informative. For representativeness, the authors used an LDA model to create a feature vector for each query. If a query's feature vector had higher cosine similarity to the average feature vector of all queries, the query was considered more representative.", "cite_spans": [ { "start": 105, "end": 131, "text": "Mehrotra and Yilmaz (2015)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Uncertainty and Traffic-Aware Active Learning", "sec_num": "3" }, { "text": "In our method, we also use informativeness and representativeness, but we introduce new ways to measure both that can be applied to semantic parsing tasks. 
For each utterance u in a set of unlabeled utterances U, we calculate f(u), a sampling weight associated with u, as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Uncertainty and Traffic-Aware Active Learning", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f(u) = \u03b2 \u03c6(u) / \u2211_{u\u2208U} \u03c6(u) + (1 \u2212 \u03b2) \u03c8(u) / \u2211_{u\u2208U} \u03c8(u)", "eq_num": "(1)" } ], "section": "Uncertainty and Traffic-Aware Active Learning", "sec_num": "3" }, { "text": "where \u03c6(u) is the representativeness and \u03c8(u) is the informativeness of u. We measure \u03c6(u) as the utterance frequency, calculated as the number of times the utterance u appeared during a given time window of traffic. We measure \u03c8(u) as 1 - our model's confidence on u. To calculate confidence, we use perplexity per word, which is the inverse probability of a model's output normalized by the number of words. We convert this perplexity into a confidence score by scaling it to a value between [0,1] using the function in Algorithm 1. The threshold is set to 0.9, which was fine-tuned based on the model's accuracy in production. In this function, confidence approaches 1 as perplexity approaches 0, confidence is 0.5 when perplexity is the threshold, and confidence approaches 0 as perplexity approaches infinity. While this scaled perplexity is not an exact measure of confidence, we found that it was effective in our experiments. Both \u03c6(u) and \u03c8(u) are normalized by the sum of all values of \u03c6(u) and \u03c8(u). We use f(u) as a weight on each utterance when sampling. 
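The weighting scheme and the perplexity-to-confidence scaling described above can be sketched in a few lines of Python. This is our own illustrative sketch, not the authors' code; the function names and the `freqs`/`perplexities` inputs (traffic counts and per-word perplexities from the parser) are assumptions.

```python
def perplexity_to_confidence(p, threshold=0.9):
    """Map a per-word perplexity p in (0, inf) to a confidence in (0, 1).

    Confidence -> 1 as p -> 0, equals 0.5 at p == threshold,
    and -> 0 as p -> infinity (Algorithm 1 in the text).
    """
    if p > threshold:
        return 1.0 / (2.0 + 100.0 * (p - threshold))
    return 1.0 - 0.5 * (p / threshold)

def sampling_weights(freqs, perplexities, beta=0.4):
    """Equation 1: f(u) = beta * phi(u)/sum(phi) + (1-beta) * psi(u)/sum(psi),
    where phi(u) is the traffic frequency and psi(u) = 1 - confidence."""
    phi = list(freqs)
    psi = [1.0 - perplexity_to_confidence(p) for p in perplexities]
    phi_sum, psi_sum = sum(phi), sum(psi)
    return [beta * a / phi_sum + (1.0 - beta) * b / psi_sum
            for a, b in zip(phi, psi)]
```

By construction the weights sum to 1, so they can be used directly as sampling probabilities; a frequent, low-confidence utterance receives the largest weight.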
Utterances that maximize f(u) by having higher frequencies and lower confidences are more likely to be selected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Uncertainty and Traffic-Aware Active Learning", "sec_num": "3" }, { "text": "The \u03b2 is a tunable term that weighs the utterance frequency against the confidence. Algorithm 1 (perplexity to confidence): given p \u2190 perplexity, if p > threshold, return 1 / (2 + 100 * (p - threshold)); otherwise, return 1 - 0.5 * (p / threshold). We manually fine-tuned \u03b2 by training 9 models with values ranging from 0.1 to 0.9 and compared performance in terms of exact-match accuracy. We found that a \u03b2 of 0.4 performed the best on our internal dataset and a \u03b2 of 0.5 performed the best on TOP, and so we use these \u03b2 values in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Uncertainty and Traffic-Aware Active Learning", "sec_num": "3" }, { "text": "The semantic parsing model we use to evaluate our method is a reimplementation of the sequence-to-sequence model with pointer generator network proposed by Rongali et al. (2020), which achieved state-of-the-art performance on Facebook TOP (Gupta et al., 2018). We use a BERT-Base model (Devlin et al., 2019) as the encoder and a transformer based on Vaswani et al. (2017) as the decoder. The encoder converts a sequence of words into a sequence of embeddings. Then at each time step, the decoder outputs either a symbol from the output vocabulary or a pointer to an input token. A final softmax layer provides a probability distribution over all actions, and beam search maximizes the output sequence probability.", "cite_spans": [ { "start": 155, "end": 176, "text": "Rongali et al. 
(2020)", "ref_id": "BIBREF13" }, { "start": 239, "end": 259, "text": "(Gupta et al., 2018)", "ref_id": "BIBREF4" }, { "start": 287, "end": 308, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF2" }, { "start": 351, "end": 372, "text": "Vaswani et al. (2017)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Parsing Model", "sec_num": "3.1" }, { "text": "We compare our method to the following baselines. RANDOM: Our random baseline randomly samples utterances for annotation. TRAFFIC-AWARE: Our traffic-aware baseline uses utterance frequencies as a weight on each utterance, prioritizing utterances asked more often. In datasets containing duplicates, this is equivalent to random sampling. CLUSTERING: In our clustering baseline (Kang et al., 2004; Ni et al., 2020) , we compute a RoBERTa (Liu et al., 2019) embedding using sentence-transformers 1 for each utterance. [Table 1: Dataset statistics (Internal / TOP): Train 10,000 / 500; Dev 2,000 / 4,032; Test 5,000 / 8,241; Unlabeled 100,000 / 13,680; Src Vocab 30,160 / 11,873; Tgt Vocab 5,400 / 116.] We cluster the embeddings with k-means and set the number of clusters to the round's budget (i.e. if our budget is 500 utterances, we create 500 clusters). 
Then we randomly sample 1 example per cluster.", "cite_spans": [ { "start": 377, "end": 396, "text": "(Kang et al., 2004;", "ref_id": "BIBREF5" }, { "start": 397, "end": 413, "text": "Ni et al., 2020)", "ref_id": "BIBREF11" }, { "start": 437, "end": 455, "text": "(Liu et al., 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 560, "end": 661, "text": "Train 10,000 500 Dev 2,000 4,032 Test 5,000 8,241 Unlabeled 100,000 13,680 Src Vocab 30,160", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Compared Approaches", "sec_num": "3.2" }, { "text": "LEAST CONFIDENCE: Our least confidence baseline (Lewis and Catlett, 1994; Culotta and McCallum, 2005) selects utterances with the lowest model confidence.", "cite_spans": [ { "start": 48, "end": 73, "text": "(Lewis and Catlett, 1994;", "ref_id": "BIBREF7" }, { "start": 74, "end": 101, "text": "Culotta and McCallum, 2005)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Compared Approaches", "sec_num": "3.2" }, { "text": "MARGIN OF CONFIDENCE: Our margin of confidence baseline (Settles and Craven, 2008) calculates the difference in confidence between the top two predictions in an n-best list. A large difference between the top two predictions indicates there is a clear top prediction, while a small difference indicates greater model uncertainty. We select the examples with the smallest difference in confidence.", "cite_spans": [ { "start": 56, "end": 82, "text": "(Settles and Craven, 2008)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Compared Approaches", "sec_num": "3.2" }, { "text": "UNCERTAINTY-AWARE: A less deterministic version of Least Confidence. We use 1 - model confidence as a weight on each utterance, prioritizing utterances with low confidence. UNCERTAINTY + CORRECTNESS: Our uncertainty + correctness baseline (Ni et al., 2020) selects the most uncertain of the predictions that are incorrect. 
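As an illustration of the margin-of-confidence baseline described above, the selection step can be sketched as follows. This is a sketch under assumed data shapes (utterances paired with their n-best confidence lists), not the authors' implementation.

```python
def margin_of_confidence(nbest_scores):
    """Gap between the two highest confidences in an n-best list.

    A small gap means the model cannot clearly separate its top two
    parses, i.e. it is more uncertain about the utterance.
    """
    top1, top2 = sorted(nbest_scores, reverse=True)[:2]
    return top1 - top2

def select_by_margin(candidates, budget):
    """candidates: list of (utterance, nbest_scores) pairs.

    Returns the `budget` utterances with the smallest margins."""
    ranked = sorted(candidates, key=lambda c: margin_of_confidence(c[1]))
    return [utt for utt, _ in ranked[:budget]]
```

For example, an utterance whose top two parses score 0.55 and 0.50 is selected before one whose parses score 0.9 and 0.1.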
In practice, there are several ways to identify an incorrect prediction, such as checking if 1) a query fails to execute, 2) a query executes but fails to answer, or 3) a query executes but does not return the expected answer. In our experimental setup, we use a more favorable setting by checking the prediction against the expected representation.", "cite_spans": [ { "start": 219, "end": 236, "text": "(Ni et al., 2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "UNCERTAINTY-AWARE:", "sec_num": null }, { "text": "[Table 2: Examples from the datasets. Internal: \"what is the capital of france\" \u2192 is the capital of(@ptr5). TOP: \"Any accidents along Culver,\" \u2192 [IN:GET INFO TRAFFIC @ptr0 @ptr1 @ptr2 [SL:LOCATION @ptr3]]. @ptrs are pointers to a source token; in the first example, @ptr5 refers to the 5th token in the source, \"france\".] We run experiments on both an internal customer dataset and the Facebook Task Oriented Parsing", "cite_spans": [], "ref_spans": [ { "start": 73, "end": 94, "text": "Task Oriented Parsing", "ref_id": null }, { "start": 252, "end": 259, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Datasets", "sec_num": "4" }, { "text": "(TOP) dataset (Gupta et al., 2018). Details and examples are shown in Tables 1 and 2. Our internal dataset contains open-domain factual questions asked by customers to a commercial voice assistant. The utterances are anonymized and labeled with a meaning representation by an internal high-precision rule-based system. We also calculate a count for each utterance based on how often the utterance was asked in a given period of time. 
This dataset contains only unique utterances, which prevents selecting the same utterance multiple times for annotation.", "cite_spans": [ { "start": 14, "end": 34, "text": "(Gupta et al., 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 71, "end": 86, "text": "Tables 1 and 2.", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Datasets", "sec_num": "4" }, { "text": "To our knowledge, there is no public semantic parsing dataset with question frequencies, and so we use a modified version of TOP. TOP is a semantic parsing dataset of 45k crowdsourced queries about navigation and public events. These queries are manually labeled with a meaning representation. In order to create a measure of representativeness, we assume that utterances with an exact-matched meaning representation are semantically similar. Utterances with meaning representations that appear more often are considered more representative. We keep one utterance per exact-matched meaning representation, and use the counts as a measure of how popular this type of question is among users. This is done for experimental purposes. In a real setting without the labels, we could use alternate measures of semantic similarity to identify more popular questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4" }, { "text": "For controlled experimentation, we simulate active learning by treating a subset of our data as unlabeled. When an unlabeled example is selected, we reveal the label and add it to the training set. All our experiments are run on an Nvidia Tesla v100 16GB GPU and the results are reported as exact match accuracy. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "For our internal dataset, we start with a base training set of 10,000 utterances and set an annotation budget of 5,000 utterances. 
In each round, we sample 500 utterances from the unlabeled set, append them with their labels to the training set, and fully retrain the model. We repeat this for 10 rounds and report results as an average over 5 runs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Internal Dataset", "sec_num": "5.1" }, { "text": "The results are shown in terms of relative change in exact-match accuracy in Figure 1a. Our method initially has similar performance to uncertainty-based baselines, but after Round 4, our method outperforms all the baselines. Table 3 has results of paired t-tests comparing our method to each baseline. All the p-values are <0.05, showing statistical significance. In particular, our method outperforms random sampling. The examples picked by the first 6 rounds of uncertainty and traffic-aware sampling (accuracy \u22067.0% at round 6) are as valuable as the examples picked by all 10 rounds of random sampling (accuracy \u22066.9% at round 10), saving on the cost of 2,000 annotations.", "cite_spans": [], "ref_spans": [ { "start": 77, "end": 86, "text": "Figure 1a", "ref_id": "FIGREF0" }, { "start": 226, "end": 233, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Internal Dataset", "sec_num": "5.1" }, { "text": "To better understand these results, we inspected examples selected by each method. We found that although the traffic-aware method picked popular utterances, annotating many similar questions had limited gains over time. On the other hand, uncertainty-based approaches picked more diverse examples, but since customer datasets can be noisy, they were prone to picking outliers that were not as useful to the model when annotated. 
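One simulated round of the procedure above (draw a batch without replacement with probability proportional to f(u), reveal the labels, then retrain) can be sketched as follows. This is our own reconstruction with hypothetical names, not the authors' experiment code; retraining is left as a stub.

```python
import numpy as np

def run_round(unlabeled, weights, budget, rng):
    """Select `budget` utterances from `unlabeled` without replacement,
    with probability proportional to the sampling weights f(u).

    Returns (selected utterances, remaining unlabeled pool)."""
    p = np.asarray(weights, dtype=float)
    idx = rng.choice(len(unlabeled), size=budget, replace=False, p=p / p.sum())
    chosen = {int(i) for i in idx}
    picked = [unlabeled[i] for i in sorted(chosen)]
    remaining = [u for i, u in enumerate(unlabeled) if i not in chosen]
    return picked, remaining
```

In the paper's internal setup this would run for 10 rounds with `budget=500`, recomputing the weights from the retrained model after each round.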
By combining frequency with uncertainty, our method was able to prioritize popular but under-represented examples, which were of interest both to customers and to the model, giving us the best performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Internal Dataset", "sec_num": "5.1" }, { "text": "We next ran experiments on TOP. Given that TOP is a smaller and simpler dataset (e.g. target vocab of 116 vs. 5,400), we start with a smaller base training set of 500 examples and set an annotation budget of 500 examples. In each round, we sample 100 examples from the unlabeled set, append them with their labels to the training set, and fully retrain the model. We see the effect of our method as early as Round 1, so we stop after 5 rounds and report results as an average over 5 runs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TOP", "sec_num": "5.2" }, { "text": "The results are shown as exact-match accuracy in Figure 1b and the p-values from paired t-tests are in Table 3. These results again show that our method significantly outperforms the baselines. Even though the traffic weights in TOP are not from customer traffic, traffic-aware sampling performs almost as well as our method. This suggests that MRL frequency is a helpful measure for this test set. We also observe that some of our uncertainty-based baselines perform worse than random sampling, in contrast to our results on the internal dataset. We hypothesize this could be because uncertainty is a less useful signal from models built with smaller training sets (TOP: 500-1,000 training examples vs. Internal: 10,000-15,000 training examples) or because low-confidence examples were less useful for TOP's test set. 
Uncertainty still provides some advantage, however, as the combination with MRL frequency leads to the best performance.", "cite_spans": [], "ref_spans": [ { "start": 49, "end": 58, "text": "Figure 1b", "ref_id": "FIGREF0" }, { "start": 103, "end": 110, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "TOP", "sec_num": "5.2" }, { "text": "In this work, we present uncertainty and traffic-aware active learning, a method that uses model confidence and traffic frequency to improve a semantic parsing model. We show that our method significantly outperforms baselines on both an internal dataset and TOP. Our method achieves the same accuracy as random sampling with 2,000 fewer annotations on our internal dataset. Based on our results, we present our method as a way to improve semantic parsers while reducing annotation costs and limiting customer data shown to annotators.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "https://github.com/UKPLab/sentence-transformers", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Active Learning for Domain Classification in a Commercial Spoken Personal Assistant", "authors": [ { "first": "Xi", "middle": [ "C" ], "last": "Chen", "suffix": "" }, { "first": "Adithya", "middle": [], "last": "Sagar", "suffix": "" }, { "first": "Justine", "middle": [ "T" ], "last": "Kao", "suffix": "" }, { "first": "Tony", "middle": [ "Y" ], "last": "Li", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Pulman", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Garg", "suffix": "" }, { "first": "Jason", "middle": [ "D" ], "last": "Williams", "suffix": "" } ], "year": 2019, "venue": "Proc. 
Interspeech", "volume": "", "issue": "", "pages": "1478--1482", "other_ids": { "DOI": [ "10.21437/Interspeech.2019-1315" ] }, "num": null, "urls": [], "raw_text": "Xi C. Chen, Adithya Sagar, Justine T. Kao, Tony Y. Li, Christopher Klein, Stephen Pulman, Ashish Garg, and Jason D. Williams. 2019. Active Learning for Domain Classification in a Commercial Spoken Per- sonal Assistant. In Proc. Interspeech 2019, pages 1478-1482.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Reducing labeling effort for structured prediction tasks", "authors": [ { "first": "Aron", "middle": [], "last": "Culotta", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2005, "venue": "AAAI", "volume": "5", "issue": "", "pages": "746--751", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aron Culotta and Andrew McCallum. 2005. Reduc- ing labeling effort for structured prediction tasks. In AAAI, volume 5, pages 746-751.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Active learning for deep semantic parsing", "authors": [ { "first": "Long", "middle": [], "last": "Duong", "suffix": "" }, { "first": "Hadi", "middle": [], "last": "Afshar", "suffix": "" }, { "first": "Dominique", "middle": [], "last": "Estival", "suffix": "" }, { "first": "Glen", "middle": [], "last": "Pink", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "43--48", "other_ids": { "DOI": [ "10.18653/v1/P18-2008" ] }, "num": null, "urls": [], "raw_text": "Long Duong, Hadi Afshar, Dominique Estival, Glen Pink, Philip Cohen, and Mark Johnson. 2018. Ac- tive learning for deep semantic parsing. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 43-48, Melbourne, Australia. 
Associa- tion for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Semantic parsing for task oriented dialog using hierarchical representations", "authors": [ { "first": "Sonal", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Rushin", "middle": [], "last": "Shah", "suffix": "" }, { "first": "Mrinal", "middle": [], "last": "Mohit", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2787--2792", "other_ids": { "DOI": [ "10.18653/v1/D18-1300" ] }, "num": null, "urls": [], "raw_text": "Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Ku- mar, and Mike Lewis. 2018. Semantic parsing for task oriented dialog using hierarchical representa- tions. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2787-2792, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Using cluster-based sampling to select initial training set for active learning in text classification", "authors": [ { "first": "Jaeho", "middle": [], "last": "Kang", "suffix": "" }, { "first": "Hyuk-Chul", "middle": [], "last": "Kwang Ryel Ryu", "suffix": "" }, { "first": "", "middle": [], "last": "Kwon", "suffix": "" } ], "year": 2004, "venue": "Pacific-Asia conference on knowledge discovery and data mining", "volume": "", "issue": "", "pages": "384--388", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jaeho Kang, Kwang Ryel Ryu, and Hyuk-Chul Kwon. 2004. Using cluster-based sampling to select initial training set for active learning in text classification. In Pacific-Asia conference on knowledge discovery and data mining, pages 384-388. 
Springer.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "On the limits of learning to actively learn semantic representations", "authors": [ { "first": "Omri", "middle": [], "last": "Koshorek", "suffix": "" }, { "first": "Gabriel", "middle": [], "last": "Stanovsky", "suffix": "" }, { "first": "Yichu", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Vivek", "middle": [], "last": "Srikumar", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", "volume": "", "issue": "", "pages": "452--462", "other_ids": { "DOI": [ "10.18653/v1/K19-1042" ] }, "num": null, "urls": [], "raw_text": "Omri Koshorek, Gabriel Stanovsky, Yichu Zhou, Vivek Srikumar, and Jonathan Berant. 2019. On the limits of learning to actively learn semantic rep- resentations. In Proceedings of the 23rd Confer- ence on Computational Natural Language Learning (CoNLL), pages 452-462, Hong Kong, China. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Heterogeneous uncertainty sampling for supervised learning", "authors": [ { "first": "D", "middle": [], "last": "David", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "", "middle": [], "last": "Catlett", "suffix": "" } ], "year": 1994, "venue": "Machine Learning Proceedings", "volume": "", "issue": "", "pages": "148--156", "other_ids": {}, "num": null, "urls": [], "raw_text": "David D Lewis and Jason Catlett. 1994. Heterogeneous uncertainty sampling for supervised learning. In Ma- chine Learning Proceedings 1994, pages 148-156. 
Elsevier.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Active learning for imbalanced sentiment classification", "authors": [ { "first": "Shoushan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shengfeng", "middle": [], "last": "Ju", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Xiaojun", "middle": [], "last": "Li", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "139--148", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shoushan Li, Shengfeng Ju, Guodong Zhou, and Xiao- jun Li. 2012. Active learning for imbalanced senti- ment classification. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Lan- guage Processing and Computational Natural Lan- guage Learning, pages 139-148, Jeju Island, Korea. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "RoBERTa: A robustly optimized BERT pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle 
Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Representative & informative query selection for learning to rank using submodular functions", "authors": [ { "first": "Rishabh", "middle": [], "last": "Mehrotra", "suffix": "" }, { "first": "Emine", "middle": [], "last": "Yilmaz", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '15", "volume": "", "issue": "", "pages": "545--554", "other_ids": { "DOI": [ "10.1145/2766462.2767753" ] }, "num": null, "urls": [], "raw_text": "Rishabh Mehrotra and Emine Yilmaz. 2015. Represen- tative & informative query selection for learning to rank using submodular functions. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '15, page 545-554, New York, NY, USA. As- sociation for Computing Machinery.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Merging weak and active supervision for semantic parsing", "authors": [ { "first": "Ansong", "middle": [], "last": "Ni", "suffix": "" }, { "first": "Pengcheng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2020, "venue": "Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ansong Ni, Pengcheng Yin, and Graham Neubig. 2020. Merging weak and active supervision for semantic parsing. 
In Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI), New York, USA.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Active learning for new domains in natural language understanding", "authors": [ { "first": "Stanislav", "middle": [], "last": "Peshterliev", "suffix": "" }, { "first": "John", "middle": [], "last": "Kearney", "suffix": "" }, { "first": "Abhyuday", "middle": [], "last": "Jagannatha", "suffix": "" }, { "first": "Imre", "middle": [], "last": "Kiss", "suffix": "" }, { "first": "Spyros", "middle": [], "last": "Matsoukas", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "90--96", "other_ids": { "DOI": [ "10.18653/v1/N19-2012" ] }, "num": null, "urls": [], "raw_text": "Stanislav Peshterliev, John Kearney, Abhyuday Jagannatha, Imre Kiss, and Spyros Matsoukas. 2019. Active learning for new domains in natural language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Industry Papers), pages 90-96, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Don't parse, generate! A sequence to sequence architecture for task-oriented semantic parsing", "authors": [ { "first": "Subendhu", "middle": [], "last": "Rongali", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Soldaini", "suffix": "" }, { "first": "Emilio", "middle": [], "last": "Monti", "suffix": "" }, { "first": "Wael", "middle": [], "last": "Hamza", "suffix": "" } ], "year": 2020, "venue": "Proceedings of The Web Conference 2020", "volume": "", "issue": "", "pages": "2962--2968", "other_ids": {}, "num": null, "urls": [], "raw_text": "Subendhu Rongali, Luca Soldaini, Emilio Monti, and Wael Hamza. 2020. Don't parse, generate! A sequence to sequence architecture for task-oriented semantic parsing.
In Proceedings of The Web Conference 2020, pages 2962-2968.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Active learning literature survey", "authors": [ { "first": "Burr", "middle": [], "last": "Settles", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Burr Settles. 2009. Active learning literature survey. Technical report, University of Wisconsin-Madison Department of Computer Sciences.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "An analysis of active learning strategies for sequence labeling tasks", "authors": [ { "first": "Burr", "middle": [], "last": "Settles", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Craven", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1070--1079", "other_ids": {}, "num": null, "urls": [], "raw_text": "Burr Settles and Mark Craven. 2008. An analysis of active learning strategies for sequence labeling tasks.
In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 1070-1079.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Deep active learning for named entity recognition", "authors": [ { "first": "Yanyao", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Hyokun", "middle": [], "last": "Yun", "suffix": "" }, { "first": "Zachary", "middle": [], "last": "Lipton", "suffix": "" }, { "first": "Yakov", "middle": [], "last": "Kronrod", "suffix": "" }, { "first": "Animashree", "middle": [], "last": "Anandkumar", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP", "volume": "", "issue": "", "pages": "252--256", "other_ids": { "DOI": [ "10.18653/v1/W17-2630" ] }, "num": null, "urls": [], "raw_text": "Yanyao Shen, Hyokun Yun, Zachary Lipton, Yakov Kronrod, and Animashree Anandkumar. 2017. Deep active learning for named entity recognition. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 252-256, Vancouver, Canada.
Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Active deep networks for semi-supervised sentiment classification", "authors": [ { "first": "Shusen", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Qingcai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiaolong", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2010, "venue": "COLING 2010: Posters", "volume": "", "issue": "", "pages": "1515--1523", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shusen Zhou, Qingcai Chen, and Xiaolong Wang. 2010. Active deep networks for semi-supervised sentiment classification. In COLING 2010: Posters, pages 1515-1523, Beijing, China.
COLING 2010 Organizing Committee.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Results of the experiments. Scores are calculated as exact-match accuracy. We only report relative change in accuracy for the internal dataset. The shaded regions represent the standard error for each point.", "num": null, "uris": null, "type_str": "figure" }, "TABREF0": { "content": "", "type_str": "table", "num": null, "html": null, "text": "Details of the datasets. Train is the starting training set in our experiments. Unlabeled is the set from which additional training examples are sampled." }, "TABREF2": { "content": "
", "type_str": "table", "num": null, "html": null, "text": "Results of paired t-tests comparing our method to each baseline. p<.05 is considered significant." } } } }