{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:13:10.446431Z" }, "title": "Do Data-based Curricula Work?", "authors": [ { "first": "Maxim", "middle": [ "K" ], "last": "Surkov", "suffix": "", "affiliation": { "laboratory": "LEYA Lab", "institution": "", "location": { "settlement": "Yandex", "region": "Higher" } }, "email": "" }, { "first": "Vladislav", "middle": [ "D" ], "last": "Mosin", "suffix": "", "affiliation": { "laboratory": "LEYA Lab", "institution": "", "location": { "settlement": "Yandex", "region": "Higher" } }, "email": "" }, { "first": "Ivan", "middle": [ "P" ], "last": "Yamshchikov", "suffix": "", "affiliation": { "laboratory": "LEYA Lab", "institution": "", "location": { "settlement": "Yandex", "region": "Higher" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Current state-of-the-art NLP systems use large neural networks that require extensive computational resources for training. Inspired by human knowledge acquisition, researchers have proposed curriculum learning-sequencing tasks (task-based curricula) or ordering and sampling the datasets (data-based curricula) that facilitate training. This work investigates the benefits of data-based curriculum learning for large language models such as BERT and T5. We experiment with various curricula based on complexity measures and different sampling strategies. Extensive experiments on several NLP tasks show that curricula based on various complexity measures rarely have any benefits, while random sampling performs either as well or better than curricula.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Current state-of-the-art NLP systems use large neural networks that require extensive computational resources for training. Inspired by human knowledge acquisition, researchers have proposed curriculum learning-sequencing tasks (task-based curricula) or ordering and sampling the datasets (data-based curricula) that facilitate training. This work investigates the benefits of data-based curriculum learning for large language models such as BERT and T5. We experiment with various curricula based on complexity measures and different sampling strategies. Extensive experiments on several NLP tasks show that curricula based on various complexity measures rarely have any benefits, while random sampling performs either as well or better than curricula.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In the last years state-of-art results in natural language processing (NLP) are often obtained with Transformer-like architectures based on the selfattention mechanism (Vaswani et al., 2017 ) such as BERT (Devlin et al., 2019) , GPT-3 (Brown et al., 2020) , T5 (Raffel et al., 2020) , which could have billions of parameters. 
Due to many parameters, these architectures require lots of time and hardware resources to be trained.", "cite_spans": [ { "start": 168, "end": 189, "text": "(Vaswani et al., 2017", "ref_id": "BIBREF15" }, { "start": 205, "end": 226, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" }, { "start": 229, "end": 255, "text": "GPT-3 (Brown et al., 2020)", "ref_id": null }, { "start": 261, "end": 282, "text": "(Raffel et al., 2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Curriculum learning (CL) is one of the popular methods to reduce training time and increase the resulting quality of the model. Inspired by the importance of adequately ordering information when teaching humans (Avrahami et al., 1997) , curriculum learning increases the difficulty of training samples shown to the model over time (Elman, 1993) . Previous studies have demonstrated that curriculum learning significantly impacts training time and quality in different machine learning domains, such as computer vision (Soviany, 2020) and reinforcement learning (Narvekar et al., 2020) . In NLP, some results hint that CL might be beneficial (Platanios et al., 2019; Xu et al., 2020; Kocmi and Bojar, 2017) ; however, these results are not as optimistic as in reinforcement learning setup.", "cite_spans": [ { "start": 211, "end": 234, "text": "(Avrahami et al., 1997)", "ref_id": "BIBREF1" }, { "start": 331, "end": 344, "text": "(Elman, 1993)", "ref_id": "BIBREF6" }, { "start": 518, "end": 533, "text": "(Soviany, 2020)", "ref_id": null }, { "start": 561, "end": 584, "text": "(Narvekar et al., 2020)", "ref_id": "BIBREF9" }, { "start": 641, "end": 665, "text": "(Platanios et al., 2019;", "ref_id": "BIBREF10" }, { "start": 666, "end": 682, "text": "Xu et al., 2020;", "ref_id": null }, { "start": 683, "end": 705, "text": "Kocmi and Bojar, 2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We suggest dividing recent research in curriculum learning into two main categories: task-driven curriculum and data-driven curriculum. The idea of the task-driven curriculum was inspired by human behavior. First, the model learns how to solve a simple task, and then the difficulty is gradually increased. This type of curriculum proposed by Bengio et al. (2009) is considered to be classical, and a majority of curriculum-related results are obtained in this framework. Alternatively to the taskdriven curriculum, some curricula try to use some form of filtering or sorting of training data that could facilitate learning a model on a given task. We suggest calling these curricula data-driven and distinguishing them from the classical task-based approach.", "cite_spans": [ { "start": 343, "end": 363, "text": "Bengio et al. (2009)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper attempts to understand when datadriven curriculum learning works for transformerbased language models. Generally, data-driven curriculum learning is organized in two steps: first, estimating the complexity for the elements that comprise the dataset; second, designing a sampling strategy, thus forming a curriculum. In the first part of the paper, we list potentially useful natural language processing complexity measures. The second part discusses possible sampling strategies that might apply to corresponding complexity measures. 
We run extensive experiments with different metrics and sampling strategies on three classes of NLP tasks: unsupervised learning with masked language modeling, text classification, and machine translation. Our experiments show that data-driven curriculum learning does not give quality increase or time reduction on all metric-sampling strategy setups and often makes results even worse.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The first important part of the curriculum learning pipeline is measuring the complexity of samples for a given dataset. Texts could have a complex structure, and one can measure their complexity in different ways. A variety of heuristically motivated methods is accompanied by several metrics based on specific aspects of information theory. For a review of heuristic text complexity measures such as length of TF-IDF (Aizawa, 2003) we address the reader to Appendix A. In this paper, we also explore the metrics initially proposed by Ay et al. (2006) to measure the complexity of finite systems and try to see if one could apply these metrics to NLP tasks. Ay et al. (2006) observes that for finite systems, a set of parts impacts the complexity of the system as well as inter-dependencies of the parts. In the context of NLP, this means that text is more than just a bag of words. The authors propose four different metrics to estimate the complexity of a system. However, one of these metrics maximizes on single-letter texts, such as \"Aaaaaaaaa,\" while the second was created to measure cyclic sequences and does not apply to texts. Thus we experiment with two other metrics, namely, Tononi, Sporns, and Edelman (TSE) (Tononi et al., 1994) and excess entropy (EE), and adapt them to the complexity of texts. For the calculation of TSE and EE for NLP we address the reader to Appendix B.", "cite_spans": [ { "start": 419, "end": 433, "text": "(Aizawa, 2003)", "ref_id": "BIBREF0" }, { "start": 536, "end": 552, "text": "Ay et al. (2006)", "ref_id": "BIBREF2" }, { "start": 659, "end": 675, "text": "Ay et al. (2006)", "ref_id": "BIBREF2" }, { "start": 1223, "end": 1244, "text": "(Tononi et al., 1994)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Metrics", "sec_num": "2" }, { "text": "The second important part of curriculum learning is the sampling strategy (or sampler) -the algorithm deciding which samples should be shown to the model at which moment. Let us observe existing curricula and suggest some new ones.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Samplers", "sec_num": "3" }, { "text": "Competence-based. CB A competence-based curriculum, offered by Platanios et al. (2019) , uniformly samples data from increasing dataset's prefix. Competence is a function c(t), which defines the size of the dataset prefix.", "cite_spans": [ { "start": 63, "end": 86, "text": "Platanios et al. (2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Samplers", "sec_num": "3" }, { "text": "c(t) = min \uf8eb \uf8ed 1, t 1 \u2212 c 2 0 T + c 2 0 \uf8f6 \uf8f8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Samplers", "sec_num": "3" }, { "text": "Where T -total number of steps, t -current step, c 0 -hyperparameter set to 0.01. Hyperbolic. HYP The main idea of this sampler is to increase average batch complexity through time. All samples are split by complexity into N sequential buckets with equal size. 
Training time is divided into N epochs and the probability of sampling the element from the j-th bucket on the i-th epoch is proportional to the distance between j and i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Samplers", "sec_num": "3" }, { "text": "P r i (j) = c |j \u2212 i| 0.5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Samplers", "sec_num": "3" }, { "text": "Where P r i (j) -probability to sample from j-th bucket on the i-th epoch, c -constant to guarantee that sum of all probabilities equals to 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Samplers", "sec_num": "3" }, { "text": "Difficulty-based. DB This sampler is a reversed version of the competence-based one. A difficulty-based sampler takes elements from a linearly decreasing suffix instead of sampling from a gradually increasing prefix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Samplers", "sec_num": "3" }, { "text": "Sort-shuffle. SS All previously described samplers do not guarantee that the model would see each element in the training data. Sort-shuffle samples each element exactly once, randomly splitting the data into batches and sorting by average complexity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Samplers", "sec_num": "3" }, { "text": "Sort-merge. SM Many complexity estimates correlate with the length of the text. The main idea of a sort-merge sampler is to remove this correlation and train the model on stable length distribution. This algorithm consists of four main steps: sort dataset by length; sequentially split into buckets; sort each bucket by a complexity metric; form i-th batch from i-th elements from each bucket. Like a sequential one, the sort-merge sampler shows each element to the model exactly once.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Samplers", "sec_num": "3" }, { "text": "Equipped with the list of metrics and curriculum samplers, we can discuss our experimental results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Samplers", "sec_num": "3" }, { "text": "We perform our experiments on three NLP tasks: text classification, machine translation (NMT), and masked language modeling (MLM). Here we discuss the first task of classification in detail. The extensive results of the experiments are available in Appendix C. All the experiments are performed with the HuggingFace library (Wolf et al., 2020) , which provides the models with their setups, such as hyperparameters and tokenizers. We did not change default parameters in our experiment unless specifically stated otherwise. Thus, the dataset and the model specify every experiment. We use the base version of the BERT model (Devlin et al., 2019) for MLM and classification, and the small version of the T5 model (Raffel et al., 2020) for machine translation. Experiments were performed on BooksCorpus 1 dataset for MLM, Sentiment140 2 and Hyperpartisan News Detection 3 for classification, and WMT16-en-de 4 for machine translation. To estimate the curriculum's convergence speed, we calculate the average number of steps to reach a threshold that is 10% lower than the resulting saturation quality metric for every problem. Figure 1 summarizes the experiments with BERT for text classification. Neither different samplers nor complexity measures improve a BERT-based classifier's resulting accuracy. Figure 2 shows the results of MLM pretraining of BERT on BooksCorpus. 
Irrespective of sampling, the complexity measures have similar ranking in terms of their performance on MLM: length, likelihood, TSE, EE, TF-IDF, maximum word rank. Since sorted sampler takes length into account by design, it is not included in the corresponding plots. Data-based curricula show inferior results in comparison with the baseline. Table 1 shows the experiments with T5 model (Raffel et al., 2020) for machine translation and various curricula. We use the BLEU metric to estimate the quality of the resulting models. We calculate the average BLEU score over ten validations at saturation. Once again, curriculum learning does not give any notable benefits.", "cite_spans": [ { "start": 324, "end": 343, "text": "(Wolf et al., 2020)", "ref_id": null }, { "start": 624, "end": 645, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" }, { "start": 712, "end": 733, "text": "(Raffel et al., 2020)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 1125, "end": 1133, "text": "Figure 1", "ref_id": "FIGREF1" }, { "start": 1301, "end": 1309, "text": "Figure 2", "ref_id": "FIGREF2" }, { "start": 1717, "end": 1724, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We try to interpret obtained results cautiously. Though Platanios et al. (2019) report that 1 https://huggingface.co/datasets/ bookcorpus 2 https://www.kaggle.com/kazanova/ sentiment140", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "3 https://huggingface.co/datasets/ hyperpartisan_news_detection 4 https://huggingface.co/datasets/wmt16 rameters, yet to the best of our capabilities, we address this issue in Appendix C.3. The results do not seem to depend on the learning rate, and once again, curriculum learning shows no benefits.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "At this point, we can only conclusively say two things: (1) a deeper investigation of the underlying information theoretic principles that stand behind curriculum learning is badly needed; (2) until we better understand these principles, data-based curriculum learning is a gamble with very low odds to gain either speed or resulting performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "In this work, we ran extensive experiments with curriculum learning for transformer-based architectures on three NLP tasks: masked language modeling, text classification, and machine translation. We demonstrate that curricula do not help in the standard training setting and sometimes even worsen results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "The publication was supported by the grant for research centers in the field of AI provided by the Analytical Center for the Government of the Russian Federation (ACRF) in accordance with the agreement on the provision of subsidies (identifier of the agreement 000000D730321P5Q0002) and the agreement with HSE University No. 70-2021-00139. This research was supported in part through computational resources of HPC facilities at HSE University (Kostenetskiy et al., 2021) 2018. An empirical exploration of curriculum learning for neural machine translation. 
arXiv preprint arXiv:1811.00739.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": "7" }, { "text": "The first idea is to determine the complexity of the text as its length. Despite its simplicity, this method is used in different works (Platanios et al., 2019; Kocmi and Bojar, 2017) . The next family of approaches boils down to phonological, morphological, lexical, or syntactic metrics derived with some form of expert linguistic knowledge. However, van der Sluis and van den Broek (2010) used Wikipedia and Simple Wikipedia corpora to demonstrate that language-based metrics do not correlate with the common sense text complexity. The third class of methods treats text as a bag of words and builds metrics based on the frequency analysis. For example, every word gets a rank equal to its position in the dictionary sorted by the number of word appearances in a corpus. In this case, complexity may be measured as a maximum rank among the words in a bag (Kocmi and Bojar, 2017) . This metric is called max frequency rank. Another possible metric is called likelihood. The metric calculates the probability of the text under the assumption that all tokens are independent, just by multiplying probabilities of all tokens in the text (Platanios et al., 2019) . Another metric from this group is TF-IDF (Aizawa, 2003) , which is widely used in search systems. Finally, the last array of methods is based on using different neural network losses as a complexity measure of a sample.", "cite_spans": [ { "start": 136, "end": 160, "text": "(Platanios et al., 2019;", "ref_id": "BIBREF10" }, { "start": 161, "end": 183, "text": "Kocmi and Bojar, 2017)", "ref_id": "BIBREF7" }, { "start": 858, "end": 881, "text": "(Kocmi and Bojar, 2017)", "ref_id": "BIBREF7" }, { "start": 1136, "end": 1160, "text": "(Platanios et al., 2019)", "ref_id": "BIBREF10" }, { "start": 1204, "end": 1218, "text": "(Aizawa, 2003)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "A Heuristic Approaches to Text Complexity", "sec_num": null }, { "text": "Let", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Using Information Theory for Text Complexity", "sec_num": null }, { "text": "X V = (X v1 , X v2 , .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Using Information Theory for Text Complexity", "sec_num": null }, { "text": ". .) be a sequence of random variables from set V = (v1, v2, . . .), and A is a subset of V , then X A is a subsequence of X V with elements from A. Let's determine H(X A ) as entropy of sequence X A . However, texts consist of words or tokens, not random variables. We propose the following procedure of transforming texts into random variable sequences. For each token in position i we compute the percentage of texts with this token on the same position and replace the original token with binary distribution with a probability of one equal to the calculated percentage. After transforming text into a sequence of random variables, we can compute its entropy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Using Information Theory for Text Complexity", "sec_num": null }, { "text": "H(X V ) = H(X v1 ) + H(X v2 |X v1 ) + H(X v3 |X v2 , X v1 ) + . . 
.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Using Information Theory for Text Complexity", "sec_num": null }, { "text": "If one wants to apply this formula, one must compute entropy for many different conditional distributions while these distributions depend on the order of tokens in a text. First, direct application of the formula would overfit a specific text since all texts are different in a corpus. Second, such computation could not be carried out in a reasonable time. The limit context for conditional distributions to the nearest neighbors one obtains the following formula", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Using Information Theory for Text Complexity", "sec_num": null }, { "text": "H(X V ) = H(X v1 ) + #V i=2 H(X v i |X v i\u22121 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Using Information Theory for Text Complexity", "sec_num": null }, { "text": "Using this approximation for entropy one can compute excess entropy (EE) and the complexity measure Tononi, Sporns and Edelman (TSE), (Tononi et al., 1994) as they are formulated by Ay et al. (2006) ", "cite_spans": [ { "start": 134, "end": 155, "text": "(Tononi et al., 1994)", "ref_id": "BIBREF13" }, { "start": 182, "end": 198, "text": "Ay et al. (2006)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "B Using Information Theory for Text Complexity", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "EE(X V ) = v\u2208V H(X V \\v ) \u2212 (n \u2212 1)H(X V ), (1) T SE(X V ) = n\u22121 k=1 k n C (k) (X V ),", "eq_num": "(2)" } ], "section": "B Using Information Theory for Text Complexity", "sec_num": null }, { "text": "where n is a size of set V and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Using Information Theory for Text Complexity", "sec_num": null }, { "text": "C (k) (X V ) = n k n k A\u2286V,|A|=k H(X A ) \u2212 H(X V ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Using Information Theory for Text Complexity", "sec_num": null }, { "text": "Curriculum learning is often apprised for the speedup of the model's convergence. The intuition here is to provide a curriculum that would help to achieve the same result faster, yet without a significant loss in quality. We carried out several experiments to see if data-based curricula could speed up the learning in transformer-based language models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C.1 Convergence Speed", "sec_num": null }, { "text": "Tables 2 3 show average number of training steps needed to reach 90% of the resulting accuracy for the corresponding classification task. On Senti-ment140 TF-IDF, TSE, and maximum word rank speed the convergence up to 3% with some samplers. However, other metrics or sampling strategies slow down the model's convergence speed, while on a bigger HND dataset, other curricula show results better than the baseline. One could conclusively say that length is the worse metric to organize curriculum in all experiment configurations. The one more important conclusion is that the model can not always estimate the complexity of the sample concerning its' internal state (MLM-loss does not speed up the training speed and drawdown the final model quality on the Sen-timent140 dataset). 
This happens when the model is expressive enough, and all samples have equal complexity in model-based metrics. Figure 2 shows a significant slowdown in model convergence speed can be seen for all curricula compared to the baseline learning regime. One can also divide all metrics into two distinct groups. The first one consists of maximum word rank and TF-IDF. The second group includes EE, TSE, likelihood, and length. The metrics in the first group allow the model to converge to a lower loss value. However, the second group's metrics hinder the convergence and seem to have higher saturation loss. Hence, it isn't easy to find a universal threshold to reasonably compare all metrics and samplers. One should also note that only maximum word rank does not degrade the model quality compared to the baseline, while other curricula cause severe deterioration. Finally, the last main observation is that curriculum learning, unfortunately, does not allow us to run MLM faster. Moreover, the number of training steps needed to reach a given threshold could be several times higher in comparison with the baseline approach. Figure 3 shows that data-driven curricula do not have a significant influence on the results.", "cite_spans": [], "ref_spans": [ { "start": 893, "end": 901, "text": "Figure 2", "ref_id": "FIGREF2" }, { "start": 1905, "end": 1913, "text": "Figure 3", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "C.1.1 Classification", "sec_num": null }, { "text": "Comparing Figure 3 with Tables 3 -2 one could see that data-based curricula are hardly beneficial even for smaller architectures. Rather, under certain conditions, one could get some improvement of convergence, yet on a different task, the same choice of complexity measure and sampling strategy would be on par with the baseline.", "cite_spans": [], "ref_spans": [ { "start": 10, "end": 18, "text": "Figure 3", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "C.1.2 Pretraining MLM", "sec_num": null }, { "text": "Extensive experiments on different NLP tasks show that data-based curriculum learning does not help to increase quality with default hyperparameters. Hyperparameters' importance for the curriculum is an open question. Some papers state that hyperparameters, especially learning rate, are essential for curriculum (Zhang et al., 2018) . On the other hand, some papers propose methods that are not highly sensitive to hyperparameters (Platanios et al., 2019) . It seems that hyperparameters choice is discussed mainly in the works addressing NMT, so we run additional experiments with our curricula and three different learning rates (10 \u22123 , 10 \u22124 , 10 \u22125 ) on NMT as well. Results demonstrate that models' behavior does not depend on the learning rate much, and for every learning rate, curricula do not give a significant quality increase. Results for excess entropy are presented in Figure 6 . : The average number of steps needed to reach given threshold for all configurations metric-sampler on pretraining on BooksCorpus dataset. Maximal deviation for 3 runs is less than 3k steps. All complexity measures based curricula reach saturation at higher losses than the baseline thus we used an arbitrary threshold of 3.5 for them. Results better than the baseline are highlighted. \u221e means that model did not reach the threshold, '-' denotes the cases when complexity measure and sampler are not compatible. 
Figure 4 : Test results for NMT on WMT16 with different learning rates with excess entropy as a complexity measure (a) learning rate 10 \u22123 (b) learning rate 10 \u22124 (c) learning rate 10 \u22125 Figure 5 : Test results for NMT on WMT16 with different learning rates with TSE as a complexity measure 127 (a) learning rate 10 \u22123 (b) learning rate 10 \u22124 (c) learning rate 10 \u22123 Figure 6 : Test results for NMT on WMT16 with different learning rates with length complexity measure 128", "cite_spans": [ { "start": 313, "end": 333, "text": "(Zhang et al., 2018)", "ref_id": null }, { "start": 432, "end": 456, "text": "(Platanios et al., 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 885, "end": 893, "text": "Figure 6", "ref_id": null }, { "start": 1408, "end": 1416, "text": "Figure 4", "ref_id": null }, { "start": 1595, "end": 1603, "text": "Figure 5", "ref_id": null }, { "start": 1775, "end": 1783, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "C.3 Data-based curricula and Hyperparameters", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "An information-theoretic perspective of tf-idf measures", "authors": [ { "first": "Akiko", "middle": [], "last": "Aizawa", "suffix": "" } ], "year": 2003, "venue": "Information Processing & Management", "volume": "39", "issue": "1", "pages": "45--65", "other_ids": {}, "num": null, "urls": [], "raw_text": "Akiko Aizawa. 2003. An information-theoretic perspec- tive of tf-idf measures. Information Processing & Management, 39(1):45-65.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Teaching by examples: Implications for the process of category acquisition", "authors": [ { "first": "Judith", "middle": [], "last": "Avrahami", "suffix": "" }, { "first": "Yaakov", "middle": [], "last": "Kareev", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Bogot", "suffix": "" }, { "first": "Ruth", "middle": [], "last": "Caspi", "suffix": "" }, { "first": "Salomka", "middle": [], "last": "Dunaevsky", "suffix": "" }, { "first": "Sharon", "middle": [], "last": "Lerner", "suffix": "" } ], "year": 1997, "venue": "The Quarterly Journal of Experimental Psychology Section A", "volume": "50", "issue": "3", "pages": "586--606", "other_ids": {}, "num": null, "urls": [], "raw_text": "Judith Avrahami, Yaakov Kareev, Yonatan Bogot, Ruth Caspi, Salomka Dunaevsky, and Sharon Lerner. 1997. Teaching by examples: Implications for the process of category acquisition. The Quarterly Journal of Experimental Psychology Section A, 50(3):586-606.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A unifying framework for complexity measures of finite systems", "authors": [ { "first": "Nihat", "middle": [], "last": "Ay", "suffix": "" }, { "first": "Eckehard", "middle": [], "last": "Olbrich", "suffix": "" }, { "first": "Nils", "middle": [], "last": "Bertschinger", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Jost", "suffix": "" } ], "year": 2006, "venue": "Proceedings of ECCS", "volume": "6", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nihat Ay, Eckehard Olbrich, Nils Bertschinger, and J\u00fcr- gen Jost. 2006. A unifying framework for complexity measures of finite systems. In Proceedings of ECCS, volume 6. 
Citeseer.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Curriculum learning", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "J\u00e9r\u00f4me", "middle": [], "last": "Louradour", "suffix": "" }, { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 26th annual international conference on machine learning", "volume": "", "issue": "", "pages": "41--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, J\u00e9r\u00f4me Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th annual international confer- ence on machine learning, pages 41-48.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners", "authors": [ { "first": "Tom", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Mann", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Ryder", "suffix": "" }, { "first": "Melanie", "middle": [], "last": "Subbiah", "suffix": "" }, { "first": "Jared", "middle": [ "D" ], "last": "Kaplan", "suffix": "" }, { "first": "Prafulla", "middle": [], "last": "Dhariwal", "suffix": "" }, { "first": "Arvind", "middle": [], "last": "Neelakantan", "suffix": "" }, { "first": "Pranav", "middle": [], "last": "Shyam", "suffix": "" }, { "first": "Girish", "middle": [], "last": "Sastry", "suffix": "" }, { "first": "Amanda", "middle": [], "last": "Askell", "suffix": "" }, { "first": "Sandhini", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Ariel", "middle": [], "last": "Herbert-Voss", "suffix": "" }, { "first": "Gretchen", "middle": [], "last": "Krueger", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Henighan", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Ramesh", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Ziegler", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Clemens", "middle": [], "last": "Winter", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Hesse", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Sigler", "suffix": "" }, { "first": "Mateusz", "middle": [], "last": "Litwin", "suffix": "" } ], "year": null, "venue": "Advances in Neural Information Processing Systems", "volume": "33", "issue": "", "pages": "1877--1901", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Ma- teusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Ad- vances in Neural Information Processing Systems, volume 33, pages 1877-1901. 
Curran Associates, Inc.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Learning and development in neural networks: The importance of starting small", "authors": [ { "first": "", "middle": [], "last": "Jeffrey L Elman", "suffix": "" } ], "year": 1993, "venue": "Cognition", "volume": "48", "issue": "1", "pages": "71--99", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey L Elman. 1993. Learning and development in neural networks: The importance of starting small. Cognition, 48(1):71-99.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Curriculum learning and minibatch bucketing in neural machine translation", "authors": [ { "first": "Tom", "middle": [], "last": "Kocmi", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "379--386", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Kocmi and Ond\u0159ej Bojar. 2017. Curriculum learn- ing and minibatch bucketing in neural machine trans- lation. In Proceedings of the International Confer- ence Recent Advances in Natural Language Process- ing, RANLP 2017, pages 379-386, Varna, Bulgaria. INCOMA Ltd.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Hpc resources of the higher school of economics", "authors": [ { "first": "", "middle": [], "last": "Ps Kostenetskiy", "suffix": "" }, { "first": "V", "middle": [ "I" ], "last": "Chulkevich", "suffix": "" }, { "first": "", "middle": [], "last": "Kozyrev", "suffix": "" } ], "year": 2021, "venue": "Journal of Physics: Conference Series", "volume": "1740", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "PS Kostenetskiy, RA Chulkevich, and VI Kozyrev. 2021. Hpc resources of the higher school of economics. In Journal of Physics: Conference Series, volume 1740, page 012050. 
IOP Publishing.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Curriculum learning for reinforcement learning domains: A framework and survey", "authors": [ { "first": "Sanmit", "middle": [], "last": "Narvekar", "suffix": "" }, { "first": "Bei", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Matteo", "middle": [], "last": "Leonetti", "suffix": "" }, { "first": "Jivko", "middle": [], "last": "Sinapov", "suffix": "" }, { "first": "E", "middle": [], "last": "Matthew", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Taylor", "suffix": "" }, { "first": "", "middle": [], "last": "Stone", "suffix": "" } ], "year": 2020, "venue": "Journal of Machine Learning Research", "volume": "21", "issue": "181", "pages": "1--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sanmit Narvekar, Bei Peng, Matteo Leonetti, Jivko Sinapov, Matthew E Taylor, and Peter Stone. 2020. Curriculum learning for reinforcement learning do- mains: A framework and survey. Journal of Machine Learning Research, 21(181):1-50.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Competence-based curriculum learning for neural machine translation", "authors": [ { "first": "Otilia", "middle": [], "last": "Emmanouil Antonios Platanios", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Stretcu", "suffix": "" }, { "first": "Barnabas", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Poczos", "suffix": "" }, { "first": "", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1162--1172", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emmanouil Antonios Platanios, Otilia Stretcu, Graham Neubig, Barnabas Poczos, and Tom Mitchell. 2019. Competence-based curriculum learning for neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1162-1172, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter J", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Journal of Machine Learning Research", "volume": "21", "issue": "", "pages": "1--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text trans- former. 
Journal of Machine Learning Research, 21:1- 67.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Curriculum learning with diversity for supervised computer vision tasks", "authors": [], "year": null, "venue": "MRC@ECAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Petru Soviany. 2020. Curriculum learning with di- versity for supervised computer vision tasks. In MRC@ECAI.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A measure for brain complexity: relating functional segregation and integration in the nervous system", "authors": [ { "first": "Giulio", "middle": [], "last": "Tononi", "suffix": "" }, { "first": "Olaf", "middle": [], "last": "Sporns", "suffix": "" }, { "first": "Gerald M", "middle": [], "last": "Edelman", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the National Academy of Sciences", "volume": "91", "issue": "11", "pages": "5033--5037", "other_ids": {}, "num": null, "urls": [], "raw_text": "Giulio Tononi, Olaf Sporns, and Gerald M Edelman. 1994. A measure for brain complexity: relating func- tional segregation and integration in the nervous sys- tem. Proceedings of the National Academy of Sci- ences, 91(11):5033-5037.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Using complexity measures in information retrieval", "authors": [ { "first": "", "middle": [], "last": "Frans Van Der Sluis", "suffix": "" }, { "first": "", "middle": [], "last": "Egon L Van Den Broek", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the third symposium on information interaction in context", "volume": "", "issue": "", "pages": "383--388", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frans van der Sluis and Egon L van den Broek. 2010. Using complexity measures in information retrieval. In Proceedings of the third symposium on information interaction in context, pages 383-388.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. 
Curran Associates, Inc.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "Remi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Patrick Von Platen", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Xu", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Scao", "suffix": "" }, { "first": "", "middle": [], "last": "Gugger", "suffix": "" } ], "year": null, "venue": "Proceedings of the 2020 Conference on Empirical", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "(a) Sentiment140 with sort-merge sampler for all complexity measures.(b) Sentiment140 with max word rank complexity measure for all samplers.(c) Hyperpartisan News with sort-shuffle samples for all complexity measures (d) Hyperpartisan News with max word rank complexity measure for all samplers.", "type_str": "figure", "num": null }, "FIGREF1": { "uris": null, "text": "Pre-trained BERT fine-tuned on Sentiment140 and Hyperpartisan News Detection datasets. Accuracy of the classifier as a function of the number of training steps.", "type_str": "figure", "num": null }, "FIGREF2": { "uris": null, "text": "Loss function dependency on the number of training steps on MLM for BooksCorpus dataset during the first 40k steps of training. Every plot depicts results for six different complexity estimates combined with a specific sampler.", "type_str": "figure", "num": null }, "FIGREF3": { "uris": null, "text": "Sentiment140 with length as complexity metric and three samplers.(b) Sentiment140 with TSE as complexity metric and three samplers.", "type_str": "figure", "num": null }, "FIGREF4": { "uris": null, "text": "Test results with LSTM on Sentiment140 dataset. Accuracy of the classifier as a function of the number of training steps.", "type_str": "figure", "num": null }, "TABREF0": { "num": null, "html": null, "content": "
Metrics     Samplers
            CB      DB      Hyp     SS      SM
baseline    18.3
length      10.1    17.4    16.3    -       -
TSE         10.3    18.4    16.8    13.8    14.8
EE          10.2    18.2    16.9    13.3    15.0
competence-based sampling is beneficial for recurrent neural networks, we could not reproduce this result in transformer-based architectures. We also run experiments to check whether data-based curricula could work on non-transformer architectures. The results do not look encouraging; see Appendix C.2.
", "type_str": "table", "text": "The average BLEU score from 50k to 100k steps on WMT16 dataset. Results better than the baseline are highlighted. '-' denotes the cases when complexity measure and sampler are not compatible." }, "TABREF1": { "num": null, "html": null, "content": "
Benfeng Xu, Licheng Zhang, Zhendong Mao, Quan Wang, Hongtao Xie, and Yongdong Zhang. 2020. Curriculum learning for natural language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6095-6104.
Xuan Zhang, Gaurav Kumar, Huda Khayrallah, Kenton Murray, Jeremy Gwinnup, Marianna J Martindale, Paul McNamee, Kevin Duh, and Marine Carpuat.
", "type_str": "table", "text": "Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics." }, "TABREF2": { "num": null, "html": null, "content": "
C.2 Data-based Curricula for Other Architectures
It seems that data-based curriculum learning cannot increase quality or reduce training time for transformer-based models. Though Platanios et al. (2019) report that competence-based sampling is beneficial for recurrent neural networks, we could not reproduce this result in transformer-based architectures.
", "type_str": "table", "text": "illustrates this fact." }, "TABREF3": { "num": null, "html": null, "content": "
Metrics     Threshold   Accuracy   Samplers
                                   CB      DB     Hyp     SS      SM
baseline    92.9%       93.8%      22k
length      92.9%       93.7%      55k     23k    22.5k   -       -
TF-IDF TSE92.9% 92.9%93.5% 93.8%\u221e 56.5k 21k 19.5k 24k 23k23.5k 33k 22k 31k
EE max wr likelihood MLM-loss92.9% 92.9% 92.9% 92.9%93.8% 93.6% 93.8% 93.9%71.5k 25.5k 22.5k 19.5k 32.5k \u221e 22k 20.5k 22.5k 39k \u221e 20k 24k 30k 20k 23.5k 18k 23k 24k 20k
", "type_str": "table", "text": "The average number of steps needed to reach given threshold for all configurations metric-sampler on text classification task on Hyperpartisan News Detections dataset. Maximal deviation for 3 runs is less than 3k steps. Results better than the baseline are highlighted. \u221e means that model did not reach the threshold, '-' denotes the cases when complexity measure and sampler are not compatible." }, "TABREF4": { "num": null, "html": null, "content": "
Metrics     Threshold   Accuracy   Samplers
                                   CB        DB     Hyp     SS      SM
baseline    85.5%       87%        17.5k
length      85.5%       86.2%      112.5k    20k    19k     -       -
TF-IDF TSE EE85.5% 85.5% 85.5%86.7% 86.8% 86.7%115.5k 21.5k 19.5k 16.5k 22k 95.5k 16.5k 20.5k 21.5k 18k 59k 19.3k 23k 20k 19k
max wr likelihood85.5% 85.5%86.7% 86.7%70k 112k 17.5k 21.5k 17.5k 21.5k 19k 18.5k 19.5k 17k
MLM-loss    85.5%       86.1%      59.5k     21k    23.5k   19.5k   20k
", "type_str": "table", "text": "The average number of steps needed to reach given threshold for all configurations metric-sampler on text classification task on sentiment140 dataset. Maximal deviation for 3 runs is less than 3k steps. Results better than the baseline are highlighted. \u221e means that model did not reach the threshold, '-' denotes the cases when complexity measure and sampler are not compatible." }, "TABREF5": { "num": null, "html": null, "content": "", "type_str": "table", "text": "" } } } }