{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:12:39.782345Z"
},
"title": "Learning Data Augmentation Schedules for Natural Language Processing",
"authors": [
{
"first": "Daphn\u00e9",
"middle": [],
"last": "Chopard",
"suffix": "",
"affiliation": {},
"email": "chopardda@cardiff.ac.uk"
},
{
"first": "Matthias",
"middle": [
"S"
],
"last": "Treder",
"suffix": "",
"affiliation": {},
"email": "trederm@cardiff.ac.uk"
},
{
"first": "Irena",
"middle": [],
"last": "Spasi\u0107",
"suffix": "",
"affiliation": {},
"email": "spasici@cardiff.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Despite its proven efficiency in other fields, data augmentation is less popular in the context of natural language processing (NLP) due to its complexity and limited results. A recent study (Longpre et al., 2020) showed for example that task-agnostic data augmentations fail to consistently boost the performance of pretrained transformers even in low data regimes. In this paper, we investigate whether datadriven augmentation scheduling and the integration of a wider set of transformations can lead to improved performance where fixed and limited policies were unsuccessful. Our results suggest that, while this approach can help train better models in some settings, the improvements are unsubstantial. This negative result is meant to help researchers better understand the limitations of data augmentation for NLP.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Despite its proven efficiency in other fields, data augmentation is less popular in the context of natural language processing (NLP) due to its complexity and limited results. A recent study (Longpre et al., 2020) showed for example that task-agnostic data augmentations fail to consistently boost the performance of pretrained transformers even in low data regimes. In this paper, we investigate whether datadriven augmentation scheduling and the integration of a wider set of transformations can lead to improved performance where fixed and limited policies were unsuccessful. Our results suggest that, while this approach can help train better models in some settings, the improvements are unsubstantial. This negative result is meant to help researchers better understand the limitations of data augmentation for NLP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years, data augmentation has become an integral part of many successful deep learning systems, especially in the fields of computer vision and speech processing (Krizhevsky et al., 2012; Jaitly and Hinton, 2013; Hannun et al., 2014; Ko et al., 2015) . Traditionally, data augmentation approaches take the form of label-preserving transforms that can be applied to the training datasets to expand their size and diversity. The idea of generating synthetic samples that share the same underlying distribution as the original data is often considered a pragmatic solution to the shortage of annotated data, and has been shown to reduce overfitting and improve generalisation performance (Shorten and Khoshgoftaar, 2019) . However, despite a sound theoretical foundation (Dao et al., 2019) , this paradigm has not yet translated to consistent and substantial improvement in natural language processing (NLP) (Longpre et al., 2020) .",
"cite_spans": [
{
"start": 171,
"end": 196,
"text": "(Krizhevsky et al., 2012;",
"ref_id": "BIBREF24"
},
{
"start": 197,
"end": 221,
"text": "Jaitly and Hinton, 2013;",
"ref_id": "BIBREF19"
},
{
"start": 222,
"end": 242,
"text": "Hannun et al., 2014;",
"ref_id": "BIBREF15"
},
{
"start": 243,
"end": 259,
"text": "Ko et al., 2015)",
"ref_id": "BIBREF21"
},
{
"start": 694,
"end": 726,
"text": "(Shorten and Khoshgoftaar, 2019)",
"ref_id": "BIBREF36"
},
{
"start": 777,
"end": 795,
"text": "(Dao et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 914,
"end": 936,
"text": "(Longpre et al., 2020)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The inability of NLP models to consistently benefit from data augmentations can be partially attributed to the general difficulty of finding a good combination of transforms and determining their respective set of optimal hyperparameters (Ratner et al., 2017) , a problem that is exacerbated in the context of text data. Indeed, since the complexity of language makes text highly sensitive to any transformations, data augmentations are often tailored to a specific task or dataset and are only shown to be successful in specific settings.",
"cite_spans": [
{
"start": 238,
"end": 259,
"text": "(Ratner et al., 2017)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we investigate whether automatically searching for an optimal augmentation schedule from a wide range of transformations can alleviate some of the shortcomings encountered when applying data augmentations to NLP. This endeavour follows the recent success of automated augmentation strategies in computer vision (Cubuk et al., 2019; Ho et al., 2019; Cubuk et al., 2020) . In doing so, we extend the efforts to understand the limits of data augmentation in NLP.",
"cite_spans": [
{
"start": 325,
"end": 345,
"text": "(Cubuk et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 346,
"end": 362,
"text": "Ho et al., 2019;",
"ref_id": "BIBREF16"
},
{
"start": 363,
"end": 382,
"text": "Cubuk et al., 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although there exist recent surveys (Feng et al., 2021; Shorten et al., 2021) that offer a comprehensive review of related work, they do not provide a comparative analysis of the different data augmentation approaches and of their effect on learning performance. In general, the literature lacks general comparative studies that encompass the variety of tasks and datasets in NLP. Indeed, most of the existing text data augmentation studies either focus on a single approach in a specific setting or compare a small set of techniques on a specific task and dataset (Giridhara. et al., 2019; Marivate and Sefara, 2020) . In addition, many of these comparisons have been conducted before the widespread adoption of contextualized representations.",
"cite_spans": [
{
"start": 36,
"end": 55,
"text": "(Feng et al., 2021;",
"ref_id": null
},
{
"start": 56,
"end": 77,
"text": "Shorten et al., 2021)",
"ref_id": "BIBREF37"
},
{
"start": 565,
"end": 590,
"text": "(Giridhara. et al., 2019;",
"ref_id": null
},
{
"start": 591,
"end": 617,
"text": "Marivate and Sefara, 2020)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Recently, Longpre et al. (2020) showed that, despite careful calibration, data augmentation yielded little to no improvement when applied to pretrained transformers even in low data regimes. While their comparative analysis is conducted on various classification tasks, it focuses on a limited set of aug-mentation strategies that are applied independently and whose hyperparameters are optimized via a random search.",
"cite_spans": [
{
"start": 10,
"end": 31,
"text": "Longpre et al. (2020)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To further assess the effectiveness of data augmentation on pretrained transformers, we investigate in this work the paradigm of learning data augmentation strategies from data in the context NLP. The idea is to leverage the training data to automatically discover an optimal combination of augmentations and their hyperparameters at each epoch of the fine-tuning. First, we define a search space that consists of a variety of transformations before relying on the training data to learn an optimal schedule of probabilities and magnitudes for each augmentation. This schedule is later used to boost performance during fine-tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In the following section, we present a variety of data augmentations and, for each category of transformations, we highlight a subset of augmentations that is representative of the category and that will constitute our search space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this study, we focus on transformative methods which apply a label-preserving transformation to existing data-rather than generative methods which create entirely new instances using generative models. Indeed their simplicity of use and their low computational cost make them good candidates for a wide deployment (Xu et al., 2016) . In the last decade, we witnessed a widespread adoption of continuous vector representations of words, which can be easily fed to deep neural network architectures. As a result, transforms have been developed not only at the lexical level (i.e., words) but also at the latent semantic level (i.e., embeddings). This distinction is emphasized throughout this section.",
"cite_spans": [
{
"start": 317,
"end": 334,
"text": "(Xu et al., 2016)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation in NLP",
"sec_num": "3"
},
{
"text": "Word Replacement A commonly used form of data augmentation in NLP is word replacement. At the lexical level, the most common approach consists in randomly replacing words with their synonyms (Zhang et al., 2015; Mueller and Thyagarajan, 2016; . Some variants include replacement based on other lexical relationships such as hypernymy (Navigli and Velardi, 2003) or simply words from the vocabulary (Wang et al., 2018; Cheng et al., 2018) . Another popular approach consists in using a language model (LM) for replacement (Kolomiyets et al., 2011; Fadaee et al., 2017; Ratner et al., 2017) . Because these transformations do not ensure the preservation of the sample class, Kobayashi (2018) suggested conditioning a bidirectional LM on the labels, an idea later revisited by Wu et al. (2019) who replaced the LM with a conditional BERT (Bidirectional Encoder Representations from Transformers).",
"cite_spans": [
{
"start": 191,
"end": 211,
"text": "(Zhang et al., 2015;",
"ref_id": "BIBREF53"
},
{
"start": 212,
"end": 242,
"text": "Mueller and Thyagarajan, 2016;",
"ref_id": "BIBREF32"
},
{
"start": 334,
"end": 361,
"text": "(Navigli and Velardi, 2003)",
"ref_id": "BIBREF33"
},
{
"start": 398,
"end": 417,
"text": "(Wang et al., 2018;",
"ref_id": "BIBREF43"
},
{
"start": 418,
"end": 437,
"text": "Cheng et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 521,
"end": 546,
"text": "(Kolomiyets et al., 2011;",
"ref_id": "BIBREF23"
},
{
"start": 547,
"end": 567,
"text": "Fadaee et al., 2017;",
"ref_id": "BIBREF11"
},
{
"start": 568,
"end": 588,
"text": "Ratner et al., 2017)",
"ref_id": "BIBREF34"
},
{
"start": 673,
"end": 689,
"text": "Kobayashi (2018)",
"ref_id": "BIBREF22"
},
{
"start": 774,
"end": 790,
"text": "Wu et al. (2019)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation in NLP",
"sec_num": "3"
},
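As a concrete illustration of LM-based word replacement, the sketch below masks one word and lets a masked language model propose a substitute. It deliberately uses an unconditional fill-mask model, not the label-conditional BERT of Kobayashi (2018) and Wu et al. (2019); the model name, the replacement rule and the single-mask restriction are illustrative assumptions, not the authors' implementation.

```python
import random

from transformers import pipeline  # assumes the transformers library is installed

# Unconditional masked-LM replacement; the paper's contextual augmentation
# additionally conditions the LM on the class label, which is omitted here.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def lm_replace(sentence: str, rng: random.Random) -> str:
    """Mask one randomly chosen word and replace it with an LM prediction."""
    words = sentence.split()
    idx = rng.randrange(len(words))
    original = words[idx]
    words[idx] = fill_mask.tokenizer.mask_token  # "[MASK]" for BERT
    # The pipeline returns candidate fills sorted by score.
    for cand in fill_mask(" ".join(words)):
        if cand["token_str"].strip() != original.lower():
            words[idx] = cand["token_str"].strip()
            break
    return " ".join(words)

print(lm_replace("the movie was surprisingly good", random.Random(0)))
```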
{
"text": "At the latent semantic level, word replacement amounts to randomly replacing its embedding with some other vector. For instance, Wang and Yang (2015) choose the k-nearest-neighbour in the embedding vocabulary as a replacement for each word. Similar strategies were later adopted by and Zhang et al. (2019) .",
"cite_spans": [
{
"start": 129,
"end": 149,
"text": "Wang and Yang (2015)",
"ref_id": "BIBREF41"
},
{
"start": 286,
"end": 305,
"text": "Zhang et al. (2019)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation in NLP",
"sec_num": "3"
},
{
"text": "In this study, as replacement methods, we select both synonym and hypernym replacement as well as contextual augmentation at the word level and nearest neighbour at the embedding level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation in NLP",
"sec_num": "3"
},
{
"text": "Noising A simple yet effective form of augmentation that is often applied to images and audio samples is data noising. Not surprisingly, this type of data augmentation can also be found in NLP despite the discrete nature of text data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation in NLP",
"sec_num": "3"
},
{
"text": "In its simplest form, data noising when applied to text consists in inserting, swapping or deleting words at random (Wei and Zou, 2019) . More generally, the process of ignoring a fraction of the input words is often referred to as word dropout (Iyyer et al., 2015) and can take multiple forms (Dai and Le, 2015; Zhang et al., 2016; Bowman et al., 2016; Xie et al., 2017; .",
"cite_spans": [
{
"start": 116,
"end": 135,
"text": "(Wei and Zou, 2019)",
"ref_id": "BIBREF44"
},
{
"start": 245,
"end": 265,
"text": "(Iyyer et al., 2015)",
"ref_id": "BIBREF17"
},
{
"start": 294,
"end": 312,
"text": "(Dai and Le, 2015;",
"ref_id": "BIBREF6"
},
{
"start": 313,
"end": 332,
"text": "Zhang et al., 2016;",
"ref_id": "BIBREF50"
},
{
"start": 333,
"end": 353,
"text": "Bowman et al., 2016;",
"ref_id": "BIBREF2"
},
{
"start": 354,
"end": 371,
"text": "Xie et al., 2017;",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation in NLP",
"sec_num": "3"
},
{
"text": "Sometimes, word replacement can also be thought of as a form of noising. For instance, replacing words at random with other words from the vocabulary introduces noise into the data (Xie et al., 2017; Cheng et al., 2018) .",
"cite_spans": [
{
"start": 181,
"end": 199,
"text": "(Xie et al., 2017;",
"ref_id": "BIBREF47"
},
{
"start": 200,
"end": 219,
"text": "Cheng et al., 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation in NLP",
"sec_num": "3"
},
{
"text": "In contrast, at the distributed representation level, this type of augmentation often takes the form of added noise to the embeddings. Possible noising schemes include Gaussian noise (Kumar et al., 2016; Cheng et al., 2018) , uniform noise (Kim et al., 2019) , Bernoulli noise and adversarial noise (Zhang and Yang, 2018) . Typically, noising is applied to every word embedding, but it can also be applied only to selected ones (Kim et al., 2019) . Alternatively, as with word dropout, noise can be incorporated into the training by discarding, across all words, some embedding dimensions with a predefined probability (Dai and Le, 2015).",
"cite_spans": [
{
"start": 183,
"end": 203,
"text": "(Kumar et al., 2016;",
"ref_id": "BIBREF25"
},
{
"start": 204,
"end": 223,
"text": "Cheng et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 240,
"end": 258,
"text": "(Kim et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 299,
"end": 321,
"text": "(Zhang and Yang, 2018)",
"ref_id": "BIBREF51"
},
{
"start": 428,
"end": 446,
"text": "(Kim et al., 2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation in NLP",
"sec_num": "3"
},
{
"text": "Noising strategies used in this study are based on random deletion, random swap and random insertion at the sentence level as well as Gaussian noise and uniform noise at the feature level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation in NLP",
"sec_num": "3"
},
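The three sentence-level noising operations used here can be stated in a few lines each. The sketch below follows the spirit of EDA (Wei and Zou, 2019); the insertion vocabulary and the way a magnitude would be mapped to counts are left out or simplified, so treat it as a minimal illustration rather than the paper's exact implementation (Appendix A.1 gives the actual scaling).

```python
import random

def random_deletion(words, q, rng):
    """Drop each word independently with probability q."""
    kept = [w for w in words if rng.random() > q]
    return kept or [rng.choice(words)]  # never return an empty sentence

def random_swap(words, n_swaps, rng):
    """Swap two randomly chosen positions, n_swaps times in a row."""
    words = list(words)
    for _ in range(n_swaps):
        i, j = rng.randrange(len(words)), rng.randrange(len(words))
        words[i], words[j] = words[j], words[i]
    return words

def random_insertion(words, n_inserts, rng, vocab=("good", "film", "story")):
    """Insert n_inserts tokens drawn from a small illustrative vocabulary."""
    words = list(words)
    for _ in range(n_inserts):
        words.insert(rng.randrange(len(words) + 1), rng.choice(vocab))
    return words

rng = random.Random(0)
sample = "an unexpectedly charming little film".split()
print(random_deletion(sample, q=0.1, rng=rng))
print(random_swap(sample, n_swaps=1, rng=rng))
print(random_insertion(sample, n_inserts=1, rng=rng))
```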
{
"text": "Back-translation Back-translation provides a way for neural machine translation (NMT) systems to leverage monolingual data to increase the amount of parallel data (Sennrich et al., 2016; Edunov et al., 2018; Fadaee and Monz, 2018) . Similarly, back-translation can be applied twice in a row (i.e., from English to another language and back to English) to generate new data points without the need for parallel corpora and can therefore find applications as a task-agnostic augmentation in other tasks such as text classification (Luque and P\u00e9rez, 2018; Aroyehun and Gelbukh, 2018), paraphrase generation (Mallinson et al., 2017) and question answering (Yu et al., 2018) .",
"cite_spans": [
{
"start": 163,
"end": 186,
"text": "(Sennrich et al., 2016;",
"ref_id": "BIBREF35"
},
{
"start": 187,
"end": 207,
"text": "Edunov et al., 2018;",
"ref_id": "BIBREF10"
},
{
"start": 208,
"end": 230,
"text": "Fadaee and Monz, 2018)",
"ref_id": "BIBREF12"
},
{
"start": 604,
"end": 628,
"text": "(Mallinson et al., 2017)",
"ref_id": "BIBREF29"
},
{
"start": 652,
"end": 669,
"text": "(Yu et al., 2018)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation in NLP",
"sec_num": "3"
},
{
"text": "Here, we consider a wide range of intermediate languages to include back-translation into our search space",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation in NLP",
"sec_num": "3"
},
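A minimal round-trip translation sketch is shown below, using googletrans, the library mentioned in Appendix A.1. The choice of German as the intermediate language is arbitrary, and the call signature matches the classic synchronous googletrans releases; newer versions of the library may expose a different interface, so this is a sketch under those assumptions.

```python
from googletrans import Translator  # unofficial Google Translate client, as in Appendix A.1

translator = Translator()

def back_translate(text: str, intermediate: str = "de") -> str:
    """Translate English -> intermediate language -> English to obtain a paraphrase."""
    forward = translator.translate(text, src="en", dest=intermediate).text
    return translator.translate(forward, src=intermediate, dest="en").text

print(back_translate("the plot is thin but the acting carries the film"))
```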
{
"text": "In NLP, little effort has been put into developing strategies that can, given a task and a dataset, learn an optimal subset of data augmentation operations and their hyperparameters (Shorten et al., 2021 ). Yet, this idea has been very successful in computer vision (Cubuk et al., 2019; Ho et al., 2019; Cubuk et al., 2020) . For instance, in the context of image classification, Cubuk et al. (2019) have proposed AutoAugment a procedure that automatically searches for optimal augmentation policies using reinforcement learning. Later, Population-Based Augmentation (PBA) (Ho et al., 2019)-an algorithm that views the data augmentation selection and calibration problem as a hyperparameter search and can thus leverage the Population-Based Training (PBT) method (Jaderberg et al., 2017) to find an optimal transformation schedule in an efficient way-was introduced as a more cost-effective yet competitive alternative to AutoAugment. Finally, Cubuk et al. (2020) introduced the RandAugment method which tackles some of the issues arising from these previous works.",
"cite_spans": [
{
"start": 182,
"end": 203,
"text": "(Shorten et al., 2021",
"ref_id": "BIBREF37"
},
{
"start": 266,
"end": 286,
"text": "(Cubuk et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 287,
"end": 303,
"text": "Ho et al., 2019;",
"ref_id": "BIBREF16"
},
{
"start": 304,
"end": 323,
"text": "Cubuk et al., 2020)",
"ref_id": "BIBREF5"
},
{
"start": 763,
"end": 787,
"text": "(Jaderberg et al., 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Data Augmentation in NLP",
"sec_num": "4"
},
{
"text": "In this study, we adapt the PBA framework to NLP in an attempt to learn an optimal schedule of data augmentation operations with optimized hyperparameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automated Data Augmentation in NLP",
"sec_num": "4"
},
{
"text": "Given a hyperparameter search space that consists of data augmentation operations along with their probability level (i.e., likelihood of being applied) and their magnitude level (i.e., strength with which they are applied), PBA works as follows: during a pre-defined number of epochs, k child models of identical architecture are trained in parallel for the task at hand on a given dataset. Periodically, the training is interrupted, all models are evaluated on a validation set and an \"exploit-and-explore\" procedure takes place. First, the worst-performing models (bottom 25%) copy the weights and hyperparameters of the best-performing models (top 25%) (exploit), then the hyperparameters are either slightly perturbed or uniformly resampled from all possible values (explore). At that point, training can continue. At the end of the training, a data augmentation policy schedule is extracted from the hyperparameters of the best performing child model. The obtained schedule can then be used to train from scratch a different model on the same task and the same dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Population-Based Augmentation",
"sec_num": "4.1"
},
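The exploit-and-explore step described above is easy to state in code. The sketch below is schematic: the in-memory representation of a child model, the 20% resampling rate and the perturbation scale are assumptions chosen for illustration, not values taken from the PBA implementation.

```python
import random

def exploit_and_explore(population, rng):
    """One exploit-and-explore step: the bottom 25% of child models copy the
    weights and hyperparameters of the top 25%, then perturb or resample them.

    Each child is a dict with 'score' (validation accuracy), 'weights' and
    'hparams', where hparams maps an operation name to (probability, magnitude).
    """
    ranked = sorted(population, key=lambda c: c["score"], reverse=True)
    quarter = max(1, len(ranked) // 4)
    for loser, winner in zip(ranked[-quarter:], ranked[:quarter]):
        loser["weights"] = dict(winner["weights"])          # exploit: copy weights
        loser["hparams"] = dict(winner["hparams"])          # ... and hyperparameters
        for op, (p, m) in list(loser["hparams"].items()):   # explore
            if rng.random() < 0.2:                          # resample (rate is an assumption)
                p, m = rng.random(), rng.random()
            else:                                           # small perturbation (scale is an assumption)
                p = min(1.0, max(0.0, p + rng.uniform(-0.1, 0.1)))
                m = min(1.0, max(0.0, m + rng.uniform(-0.1, 0.1)))
            loser["hparams"][op] = (p, m)
    return population

rng = random.Random(0)
population = [
    {"score": rng.random(),
     "weights": {"classifier": 0.0},
     "hparams": {"synonym_replacement": (0.1, 0.2), "random_swap": (0.3, 0.1)}}
    for _ in range(16)
]
exploit_and_explore(population, rng)
```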
{
"text": "Our implementation builds on the original PBA codes (Ho et al., 2019) . We make the NLP adaptation of this framework publicly available 1 .",
"cite_spans": [
{
"start": 52,
"end": 69,
"text": "(Ho et al., 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Two of the datasets suggested by Longpre et al. (2020) in their comparative analysis are used to conduct our experiments: SST-2 (Socher et al., 2013) and MNLI (Williams et al., 2018) . The former is a corpus of movie reviews used for sentiment analysis (single sentence, binary classification), whereas the latter is a natural language inference corpus (two sentences, multi-class classification). Accuracy and mis-matched accuracy respectively are used for evaluation. We take N = {1500, 2000, 3000, 9000} samples from the original train sets to constitute our train-validation sets. As test sets, we use the full original validation sets which-unlike the original test sets-contain publicly available ground-truths.",
"cite_spans": [
{
"start": 33,
"end": 54,
"text": "Longpre et al. (2020)",
"ref_id": "BIBREF27"
},
{
"start": 128,
"end": 149,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF38"
},
{
"start": 159,
"end": 182,
"text": "(Williams et al., 2018)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5.1"
},
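The paper does not state how the N samples were drawn; one convenient way to reproduce the setup is with the Hugging Face datasets library, as sketched below. The shuffling seed and the use of shuffle-then-select are assumptions of this sketch.

```python
from datasets import load_dataset  # Hugging Face datasets library (an assumption of this sketch)

N = 1500  # one of the sizes used in the paper: 1500, 2000, 3000 or 9000

# SST-2: single-sentence binary classification; MNLI: sentence-pair, multi-class.
sst2 = load_dataset("glue", "sst2")
mnli = load_dataset("glue", "mnli")

# Subsample N examples from the original train split for train + validation,
# and keep the original validation split as the test set (its labels are public).
sst2_train = sst2["train"].shuffle(seed=0).select(range(N))
sst2_test = sst2["validation"]

mnli_train = mnli["train"].shuffle(seed=0).select(range(N))
mnli_test = mnli["validation_mismatched"]  # mismatched accuracy is reported for MNLI

print(len(sst2_train), len(sst2_test), len(mnli_train), len(mnli_test))
```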
{
"text": "The hyperparameter space consists of 10 data augmentation operations highlighted in the previous section with associated probability and magnitude. For most replacement methods, the magnitude level can be thought of as a percentage of tokens on which the transformation is applied and, for noising transforms, it corresponds to the amount of noising. In the context of back-translation, however, magnitude relates to the quality of the translation according to BLEU-3 scores (Aiken, 2019). Details concerning the implementation of these augmentations as well as how exactly magnitude is defined for each one of them can be found in Appendix A.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameter Space",
"sec_num": "5.2"
},
{
"text": "The search is conducted on 48 epochs using around 20% of the N data points for training and the rest for validation. Both the child models and the final model follow the original uncased BERT base architecture suggested by Devlin et al. (2019) . The learning rate is chosen so as to slow down the finetuning without affecting the performance. Further details are provided in Appendix A.2.",
"cite_spans": [
{
"start": 223,
"end": 243,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Search",
"sec_num": "5.3"
},
{
"text": "The main results can be found in Table 1a and Table 1b . The first row corresponds to the performance on the test set when training the model on all N samples without any augmentation. The second row in the table contains the result of the evaluation on the test set of the model trained on all N samples using the discovered schedules (i.e., for each data size, the schedule of the child model with the highest validation accuracy at the end of the search is used to train the final model). The same slowed-down learning rate and the same extended number of epochs are used across all experiments to allow for a fair comparison. Overall, the improvements yielded by the optimized data augmentation schedules are inconsistent and unsubstantial (below 0.8%). Even though the incorporation of transforms has a small positive impact on the SST-2 dataset, it has the opposite effect on MNLI (i.e., the scores plummet by as much as 1.39%). A possible reason for these poor results might be due to the difference in settings between our experiments and the ones in the PBA study. Indeed, our search is conducted on 48 epochs as opposed to the 160 to 200 epochs suggested for image classification tasks and the exploit-and-explore procedure takes place after each epoch rather than after every 3 epochs. In addition, the size of the training datasets is very different. The size of our largest experiment is roughly the same as the size of the smallest dataset in (Ho et al., 2019) .",
"cite_spans": [
{
"start": 1458,
"end": 1475,
"text": "(Ho et al., 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 33,
"end": 55,
"text": "Table 1a and Table 1b",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Surprisingly, our data-driven search seems unable to reproduce the performance boost reported by Longpre et al. (2020) on the MNLI dataset, even though the augmentations considered in their work are part of our search space. This might be explained by the way augmentations are applied. In our study, we transform each training sample by applying up to 2 transformations in a row with probability p and magnitude m. In contrast, Longpre et al. add N \u00d7 \u03c4 augmented samples to the training set with \u03c4 \u2208 {0.5, 1, 1.5, 2}, meaning that the original examples are provided along with their augmented counterparts at each iteration.",
"cite_spans": [
{
"start": 97,
"end": 118,
"text": "Longpre et al. (2020)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Given the stochastic nature of the searching process, the discovered schedule is bound to differ from one run to another. So far, we have run a single search for each experiment setting. In this section, we investigate whether the limited effect of automatic augmentation on the model performance may be caused by the stochasticity of the search. To that end, we run 10 independent searches on the SST-2 dataset with N=1500 and use each of the 10 discovered schedules to train a separate model. All the network hyperparameters are kept the same as in the previous section. Overall, the standard deviation over 10 independent schedules is 0.55%, which indicates that the performance of the training is robust across searches. Thus, the poor results observed in the previous section cannot be explained by the variability of schedules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search Robustness",
"sec_num": "6.1"
},
{
"text": "However, a closer look at these 10 individual schedules reveals that the chosen augmentation hyperparameters are very different from one run to another and that the search does not seems to favour any particular set of augmentation transforms. This may indicate that, in this setting, data augmentation acts more as a regulariser rather than a way to learn invariance properties and that, as a result, any kind of augmentation transform has a similar effect on performance. In view of these findings, it would be interesting to explore whether relying on a greater number of child models during the search could potentially yield less disparate schedules and improve the overall quality of the search. For the interested reader, the various schedules are displayed in Appendix A.5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search Robustness",
"sec_num": "6.1"
},
{
"text": "As mentioned earlier, the limited impact of automatic data augmentation scheduling in our settings might be due to the small number of samples available for each experiment. In particular, one of the drawbacks of PBA is that a large portion of training data (approximately 80% as suggested by Ho et al. (2019)) has to be set aside to form a validation set that is used during the search to find optimal hyperparameters. For example, at N=1500 only 250 examples are used to learn the network weights during the search while the remaining 1250 samples are used for hyperparameter selection. As a result of this discrepancy, the selected data augmentation might be relevant when only 250 data points are available for training but less effective when learning with 1500 samples as is ultimately the case. In this section, we investigate whether the poor results observed in Table 1 Table 2 : Performance on the SST-2 test set of the model trained on N =1500 samples with the schedule discovered using different proportions of validation and training sets. For each split ratio, the model is trained 10 times using the schedule yielded by a single search. The mean accuracy and standard deviation are reported.",
"cite_spans": [],
"ref_spans": [
{
"start": 871,
"end": 878,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 879,
"end": 886,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Validation size",
"sec_num": "6.2"
},
{
"text": "chosen to split the available data into a train and a validation set. To that end, we run the search on the SST-2 dataset using different ratios to divide the N=1500 samples at hand. The results reported in Table 2 suggest that using different proportions of train and validation examples does not affect the effectiveness of the augmentation schedule in this setting. In fact, the performance remains the same even though the model is trained with schedules that were optimized using very different split ratios. This might be explained by the fact that both the train and the validation sets are too small to find optimal augmentation hyperparameters irrespective of the chosen split ratio. Alternatively, it is possible that the chosen dataset can simply not benefit from augmentation because of its nature. To verify this hypothesis, it would be interesting to extend this analysis to a wider range of datasets, including actual low-resource datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 207,
"end": 214,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Validation size",
"sec_num": "6.2"
},
{
"text": "The results suggest that augmentation schedules and data-driven parameter search do not provide a consistent and straightforward way to improve the performance of NLP models that use pretrained transformers. There are a few possible explanations for this phenomenon. First, the overall setup of the PBA approach (e.g., the need for large validation sets) might not be well suited for low-data regimes in NLP. A second but more likely reason is that transformers are already pre-trained on huge datasets and their representations may already be invariant to many of the transformations that are encoded into the data augmentation. A systematic investigation into the latter hypothesis is required, which, if proven, would show that data augmentation may be redundant when opting to use transformers to implement NLP solutions. A final reason might be that the search space we consider only contains transformative data augmentation techniques and omits generative ones, even though the latter have started to show some promising results. dicted by a language model (LM) conditioned on the labels. In this work, we use the implementation provided by Wu et al. (2019) which uses BERT as a conditional masked language model. At the beginning of the search, the model is finetuned on the training data on a task that applies extra label-conditional constraint to the traditional masked language objective. Once fine-tuned, the model can be used to infer masked words given a label. When applied to sample s i , this operation replaces n i = m i \u2022 |s i | of the tokens with a mask, where |s i | is the number of tokens in the sample after applying the BERT tokenizer. Then, along with its label, the masked sample is fed to the fine-tuned conditional model which infers a vocabulary word for each of the masked tokens. These predictions are used as a replacement.",
"cite_spans": [
{
"start": 1148,
"end": 1164,
"text": "Wu et al. (2019)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Random Insertion When applied to a sample s i , the random insertion operation randomly adds a token to the sample. The number of tokens to insert n i is set to a fraction of the length of s i , namely",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "n i = m i \u2022 |s i | ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "where |s i | is the number of tokens in s i andm i = 0.25 \u2022 m i is the scaled-down magnitude which ensures the number of inserted tokens does not exceed 25% of the original number of tokens and, by extension, that the new sample contains at most 20% of randomly inserted tokens. We include two independent variants: each inserted token is either a synonym of one of the tokens (selected uniformly at random) in s i as suggested by Wei and Zou (2019) as part of their EDA techniques or is sampled uniformly at random from a subset of the BERT vocabulary. Note that we only consider words between index 1996 and 29611 of the vocabulary to exclude special and unused tokens as well as punctuation, digits and tokens with non-English characters. We also ignore tokens that start with \"##\" the special characters used to indicate a trailing WordPiece token. The position for the insertion of the new token in s i is chosen uniformly at a random. The implementation uses the codes provided by Wei and Zou (2019) .",
"cite_spans": [
{
"start": 431,
"end": 449,
"text": "Wei and Zou (2019)",
"ref_id": "BIBREF44"
},
{
"start": 987,
"end": 1005,
"text": "Wei and Zou (2019)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
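The restriction to vocabulary indices 1996-29611 and the exclusion of "##" continuations can be expressed directly with a BERT tokenizer, as in the sketch below. The helper function and the example sentence are illustrative; only the index range and the "##" filter come from the description above.

```python
import random

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Candidate tokens: indices 1996-29611 of the BERT vocabulary, skipping
# "##"-prefixed WordPiece continuations, as described above.
candidates = [
    tok for tok in tokenizer.convert_ids_to_tokens(list(range(1996, 29612)))
    if not tok.startswith("##")
]

def insert_random_token(words, rng):
    """Insert one token sampled from the restricted BERT vocabulary at a random position."""
    words = list(words)
    words.insert(rng.randrange(len(words) + 1), rng.choice(candidates))
    return words

print(insert_random_token("a quiet and moving documentary".split(), random.Random(0)))
```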
{
"text": "The random deletion operation removes a fraction of the tokens from the sample. Each token is discarded with probability q i , where q i =m i = 0.25 \u2022 m i which is the magnitude level scaled down between 0 and 0.25 to guarantee that at most half of the tokens are removed. This allows a wide range of values around the original intensity parameter of 0.1 suggested by Wei and Zou (2019) whose implementation we use.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Random Deletion",
"sec_num": null
},
{
"text": "Random Swap This augmentation swaps any two words from the sample s i at random n i times in a row, where n i = m i * |s i | and |s i | is the number of tokens in sample s i . The magnitude parameter m i is scaled down to have a maximum value of 0.25 to ensure that at most 50% of the words are swapped. Once again, we use the implementation provided by Wei and Zou (2019) .",
"cite_spans": [
{
"start": 354,
"end": 372,
"text": "Wei and Zou (2019)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Random Deletion",
"sec_num": null
},
{
"text": "The Gaussian noising operation is not applied on the input sequences but rather directly on the contextualized word representations. Let w ij be the embedding of word j in sample s i . Then, each embedding in the sample is transformed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Noising",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w ij = w ij + e j , e jk \u223c N (0, \u03c3 2 ) ,",
"eq_num": "(1)"
}
],
"section": "Gaussian Noising",
"sec_num": null
},
{
"text": "where \u03c3 = m i and e j is a vector of the length of d (embedding dimension) with elements e jk normally distributed with mean 0 and standard deviation m i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Noising",
"sec_num": null
},
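Eq. (1) amounts to a one-line tensor operation once the contextualized embeddings are available. The PyTorch sketch below assumes the noise is injected after the representation layers, as described; the tensor shapes are illustrative stand-ins for BERT-base outputs.

```python
import torch

def gaussian_noise(embeddings: torch.Tensor, magnitude: float) -> torch.Tensor:
    """Add element-wise Gaussian noise e_jk ~ N(0, sigma^2) with sigma = magnitude.

    `embeddings` is a (sequence_length, d) tensor of contextualized word vectors.
    """
    return embeddings + magnitude * torch.randn_like(embeddings)

tokens = torch.randn(12, 768)  # stand-in for BERT-base token embeddings (d = 768)
noised = gaussian_noise(tokens, magnitude=0.1)
print(noised.shape)
```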
{
"text": "Uniform Noising Similarly to Gaussian noising, the uniform noising operation is applied directly on the contextualized embeddings. More specifically, Back-translation The back-translation operation first translates the sample s i to an intermediate language before translating the intermediate translation back to English. To allow us to incorporate this transform into the search space, we relate the magnitude level with the quality of the translation: when back-translation is applied with a low magnitude, the intermediate language used is one that achieves a high BLEU3 score according to Aiken (2019). Similarly, high magnitude settings back-translate samples through a language with a poor BLEU3 score. Table 4 summarizes the languages that can be chosen for each level of magnitude. To generate translations, we use the python library Googletrans which uses the Google Translate Ajax API to make calls.",
"cite_spans": [],
"ref_spans": [
{
"start": 710,
"end": 717,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Gaussian Noising",
"sec_num": null
},
{
"text": "w ij = w ij + e j ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Noising",
"sec_num": null
},
{
"text": "At the beginning of training, all probability and magnitude hyperparameters are set to 0. This follows suggestions by Ho et al. (2019) who postulate that little to no augmentation is needed at the beginning of the training, since the model only starts to overfit later on, and the data should increasingly become more diverse throughout the training. Because the complexity of the BERT models lies in the contextual representation layers which are already pre-trained and the task-specific layer that needs to be fine-tuned is rather simple, we keep the architecture identical both for the child models and for the final model. A key difference between applying PBA to image classification and applying PBA to NLP tasks with pre-trained BERT model is that in the former settings models are commonly trained for hundreds of epochs, whereas only two to four epochs of fine-tuning are sufficient in the latter settings. Consequently, the original strategy of running a search for 160 or 200 epochs (depending on the model and dataset) while having exploit-and-explore procedures take place after every 3 epochs is not feasible for NLP tasks with a pretrained BERT model. Hence we modify the learning rate to slow down the fine-tuning process. More specifically, we look through a small grid search for a learning rate that can replicate the performance achieved when using the original parameters (Devlin et al., 2019) but on a larger number of epochs. Thus, by reducing the learning rate, we find a way to carry out the search over a total of 48 epochs. The search is conducted on 16 child models that are trained in parallel. At the beginning of the search, approximately 80% of the training data are set aside to form the validation set, which will be used to periodically assess the performance of the child models. The remaining training data are used to optimize the networks. After each epoch (instead of 3 originally), the exploit-and-explore procedure takes place where the 4 worst performing (on the validation set) child models copy the weights and the parameters of the 4 best performing child models. At the end of the 48 epochs, the augmentation schedule of the model with the highest performance (on the validation set) is extracted.",
"cite_spans": [
{
"start": 118,
"end": 134,
"text": "Ho et al. (2019)",
"ref_id": "BIBREF16"
},
{
"start": 1394,
"end": 1415,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Search",
"sec_num": null
},
{
"text": "To train the final model, the train and validation data are grouped to form the final training set. Training is conducted using the same learning rate and the same number of epochs as during the search and uses the discovered schedule for augmentation. At the end of the training, the performance of the trained model is evaluated on an independent test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.3 Train",
"sec_num": null
},
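To make the reuse of a discovered schedule concrete, the sketch below indexes per-operation (probability, magnitude) pairs by epoch and applies them during the final training loop. The schedule values and the toy operations are placeholders, not values produced by the actual search or the transforms of Appendix A.1.

```python
import random

# A discovered schedule maps each fine-tuning epoch to per-operation
# (probability, magnitude) pairs; the numbers below are placeholders.
schedule = {
    0: {"synonym_replacement": (0.0, 0.0), "random_swap": (0.0, 0.0)},
    1: {"synonym_replacement": (0.2, 0.1), "random_swap": (0.1, 0.3)},
    2: {"synonym_replacement": (0.4, 0.2), "random_swap": (0.3, 0.3)},
}

# Toy operations standing in for the real transforms described in Appendix A.1.
ops = {
    "synonym_replacement": lambda words, m: words,  # placeholder: would replace ~m of the words
    "random_swap": lambda words, m: list(reversed(words)) if m > 0.25 else words,  # placeholder
}

def augment(words, hparams, rng):
    """Apply each operation with its scheduled probability and magnitude."""
    for name, (p, m) in hparams.items():
        if rng.random() < p:
            words = ops[name](words, m)
    return words

rng = random.Random(0)
for epoch, hparams in schedule.items():
    print(epoch, augment("a gentle , touching film".split(), hparams, rng))
```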
{
"text": "The implementation parameters can be found in Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 46,
"end": 53,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "A.4 Implementation details",
"sec_num": null
},
{
"text": "Additional results, tables and figures can be found in this section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.5 Results and Discussion",
"sec_num": null
},
{
"text": "In the main text, we touched upon the fact that the schedules yielded by the same search can vary significantly from one run to another. For illustration, the schedules yielded by 10 independent searches on the SST-2 dataset with N=1500 samples are displayed in Figure 1 . To reduce the number of figures, the product of the magnitude and the probability hyperparameters through the epochs is shown for each schedule. This figure shows that the optimal set of hyperparameters varies significantly from one search to another. In Figure 2 the average magnitude and probability parameters over the 10 schedules at each epoch is displayed. These plots allow us to realise that, while the parameters generally increase throughout the epochs, the magnitudes and probabilities of each transform have a similar value. Although some operations (e.g. Random Swap) have slightly higher average parameters than others, we can see that no augmentation transform clearly dominates the others. This could indicate that the role of the augmentation operations in this setting is not the one that is expected. Indeed, it seems that the data augmentation merely acts as a regulariser and do not help the network learn any kind of invariance. Figure 1 : The schedules yielded by 10 independent searches on SST-2 with N = 1500 samples (using 250 for training and 1250 for validation during the search). The height of each bar corresponds to the product of the probability and the magnitude parameters at each epoch. Figure 2 : The average probability and magnitude values for the schedules yielded by 10 independent searches on SST-2 with N =1500 samples. The height of each bar corresponds to the average probability and magnitude parameters at each epoch over the 10 schedules.",
"cite_spans": [],
"ref_spans": [
{
"start": 262,
"end": 270,
"text": "Figure 1",
"ref_id": null
},
{
"start": 528,
"end": 536,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1224,
"end": 1232,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1496,
"end": 1504,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "A.5.1 Schedule Robustness Experiment",
"sec_num": null
},
{
"text": "In this section, we discuss the split ratio experiment in more detail. Overall, the main idea behind using a large validation set is to choose augmentation hyperparameters that do not overfit the validation samples and thus generalize well to unseen data. However, in our case, since the total number of available samples N is small in all experiments, this implies that the size of the training set will be extremely limited. This might hinder the learning process (with too few training examples it can be difficult to learn the optimal network weights) or make the choice of augmentation hyperparameters irrelevant for larger training sets (there is no guarantee that the augmentation chosen for the small train set with will also help when ultimately training with both the training and the validation set). Thus, there exists a clear trade-off between the size of the two sets: while a large validation set can allow for better optimization of the augmentation hyperparameters, a larger training set allows for better optimization of the network weights which, in turn, has an impact on the quality of the augmentation hyperparameters evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.5.2 Validation Size Experiment",
"sec_num": null
},
{
"text": "https://github.com/chopardda/LDAS-NLP",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "As a reminder we consider the following 11 data augmentation methods: The search space consists of augmentation operations with associated probability and magnitude. More specifically, this can be represented as a vector of 11 tuples (o i , p i , m i ) (i.e., one tuple for each transform). During the training, up to two data augmentation operations o i are drawn uniformly at random for each training sample and applied with probability p i and magnitude m i . As suggested by Ho et al. (2019) , we set the number of operations to 0, 1 and 2 with probabilities 0.2, 0.3 and 0.5 respectively. The operations are applied in the same order in which they are drawn. However, the two embedding-level noising operations (i.e., Gaussian noising and uniform noising) are always applied after the other augmentations since they must be applied in the middle of the graph, after the representation layers, whereas the other augmentations are applied directly on the input of the neural network.\u2022To allow for a smooth parametrisation of the search space with large coverage, probabilities and magnitudes can take any values between 0 and 1: p, m \u2208 [0, 1]. This is different from the original PBA algorithm where the parameters are limited to discrete values. The magnitude level, which represents the intensity with which each operation should be applied, is scaled down differently to fit the different operations. Maximal magnitude values are chosen so as to allow for a wide enough array of impactful values and their specific values for each augmentation are indicated in the corresponding paragraphs.All transformation operations are detailed below, including implementation, and the ones that are applied directly on the input rather than on the embeddings are illustrated in Table 3 .",
"cite_spans": [
{
"start": 479,
"end": 495,
"text": "Ho et al. (2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 1773,
"end": 1780,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "A.1 Hyperparameter Search Space",
"sec_num": null
},
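The per-sample application rule above (draw 0, 1 or 2 operations, then apply each with its own probability and magnitude) can be sketched as follows. Operation names follow Appendix A.1, but the numeric values and the placeholder implementations are illustrative assumptions.

```python
import random

# Search space: one (operation, probability, magnitude) tuple per transform.
policy = [
    ("synonym_replacement", 0.3, 0.15),
    ("random_deletion", 0.2, 0.10),
    ("back_translation", 0.1, 0.50),
]

# Placeholder implementations standing in for the real transforms.
ops = {name: (lambda sample, m: sample) for name, _, _ in policy}

def apply_policy(sample, policy, rng):
    """Draw 0, 1 or 2 operations (with probabilities 0.2/0.3/0.5, as in the text)
    and apply each drawn operation with its probability p and magnitude m."""
    n_ops = rng.choices([0, 1, 2], weights=[0.2, 0.3, 0.5])[0]
    for name, p, m in rng.sample(policy, k=n_ops):
        if rng.random() < p:
            sample = ops[name](sample, m)
    return sample

print(apply_policy("the ending feels rushed", policy, random.Random(0)))
```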
{
"text": "The implementation follows the one suggested by the authors of the easy data augmentation (EDA) techniques (Wei and Zou, 2019) and uses the provided codes. First, the number of words that are replaced with one of their synonyms is determined as n i = m i * |s i | , wit\u0125 m the magnitude level scaled down between 0 and 0.25. Then stop-words are removed from the sample. While the number of words replaced is lower than n i , one word is selected uniformly at random among the words that have not been replaced yet and is replaced with one of its synonyms. Synonyms are retrieved with WordNet (Miller, 1998) . Note that since many words have multiple meanings, it is not rare that the chosen synonym carries a different meaning than the original word.",
"cite_spans": [
{
"start": 107,
"end": 126,
"text": "(Wei and Zou, 2019)",
"ref_id": "BIBREF44"
},
{
"start": 592,
"end": 606,
"text": "(Miller, 1998)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synonym Replacement",
"sec_num": null
},
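A minimal WordNet-based synonym replacement in the spirit of the description above is sketched below, assuming NLTK with its WordNet and stop-word corpora already downloaded. The candidate-selection loop and synonym filtering are simplifications of the EDA code, not a reproduction of it.

```python
import random

from nltk.corpus import stopwords, wordnet  # assumes the NLTK corpora have been downloaded

STOPWORDS = set(stopwords.words("english"))

def synonym_replacement(words, magnitude, rng):
    """Replace up to 0.25 * magnitude * len(words) non-stop-words with WordNet synonyms."""
    n_target = int(0.25 * magnitude * len(words))
    words = list(words)
    candidates = [i for i, w in enumerate(words) if w.lower() not in STOPWORDS]
    rng.shuffle(candidates)
    replaced = 0
    for i in candidates:
        if replaced >= n_target:
            break
        synonyms = {
            lemma.name().replace("_", " ")
            for syn in wordnet.synsets(words[i])
            for lemma in syn.lemmas()
            if lemma.name().lower() != words[i].lower()
        }
        if synonyms:
            words[i] = rng.choice(sorted(synonyms))
            replaced += 1
    return words

print(synonym_replacement("the film is a quiet triumph".split(),
                          magnitude=0.8, rng=random.Random(0)))
```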
{
"text": "The process for hypernym replacement is identical to that of synonym replacement in all respect except that hypernyms instead of synonyms are extracted using WordNet.Nearest Neighbour At the beginning of each search (and at the beginning of the final training), we feed every training sample into the pretrained BERT and use the contextualized representation of each token to build a k-d tree. Note that this process is one-time only and is tailored to the train set. When applied to sample s i , the nearest-neighbour operation first tokenizes the sample into WordPiece tokens using the BERT tokenizer before computing the number of tokens that will be replaced as follows:Here, |s i | corresponds to the number of WordPiece tokens in sample s i and m i = 0.25 \u2022 m i is the scaled-down level of magnitude, which has a maximum value of 0.25 so that at most 25% of the tokens are replaced. Then, n i WordPiece tokens at drawn uniformly at random. For each one of them, the 10 tokens with the nearest embeddings (in the context of sample s i ) are retrieved and one of them is selected for replacement using a geometric distribution with parameter q = 0.5. A geometric distribution ensures that the nearest neighbours have a higher chance to be selected as a replacement than the more distant ones. The implementation is based on the one provided by Dale (2020) .",
"cite_spans": [
{
"start": 1348,
"end": 1359,
"text": "Dale (2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hypernym Replacement",
"sec_num": null
},
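The k-d tree lookup with geometrically weighted neighbour selection can be sketched as follows. The random embeddings stand in for the contextualized BERT representations that the paper builds the tree from, and the clamping of the geometric draw to the k retrieved neighbours is an assumption of this sketch (in practice the query token itself would also be excluded).

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Stand-in for contextualized BERT embeddings of every training token; in the
# paper the tree is built once per training set from the pretrained BERT outputs.
token_embeddings = rng.normal(size=(5000, 768)).astype(np.float32)
tree = cKDTree(token_embeddings)

def nearest_neighbour_replacement(query_vec, k=10, q=0.5):
    """Return the index of a near neighbour, favouring closer ones geometrically.

    A geometric distribution with parameter q = 0.5 picks the 1st, 2nd, 3rd ...
    nearest neighbour with probability 0.5, 0.25, 0.125, ...
    """
    _, idx = tree.query(query_vec, k=k)
    rank = min(int(rng.geometric(q)) - 1, k - 1)  # clamp to the k retrieved neighbours
    return idx[rank]

replacement_index = nearest_neighbour_replacement(token_embeddings[42])
print(replacement_index)
```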
{
"text": "The contextual augmentation transform replaces words by these pre-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Augmentation",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An updated evaluation of google translate accuracy",
"authors": [
{
"first": "",
"middle": [],
"last": "Milam Aiken",
"suffix": ""
}
],
"year": 2019,
"venue": "Studies in linguistics and literature",
"volume": "3",
"issue": "3",
"pages": "253--260",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milam Aiken. 2019. An updated evaluation of google translate accuracy. Studies in linguistics and literature, 3(3):253-260.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Aggression detection in social media: Using deep neural networks, data augmentation, and pseudo labeling",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Segun Taofeek Aroyehun",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gelbukh",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)",
"volume": "",
"issue": "",
"pages": "90--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Segun Taofeek Aroyehun and Alexander Gelbukh. 2018. Aggression detection in social me- dia: Using deep neural networks, data augmen- tation, and pseudo labeling. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018), pages 90-97.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Generating sentences from a continuous space",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Vilnis",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Rafal",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Jozefowicz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Twentieth Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel Bowman, Luke Vilnis, Oriol Vinyals, An- drew M Dai, Rafal Jozefowicz, and Samy Ben- gio. 2016. Generating sentences from a contin- uous space. In Proceedings of the Twentieth Conference on Computational Natural Language Learning (CoNLL).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Towards robust neural machine translation",
"authors": [
{
"first": "Yong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Fandong",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1756--1766",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yong Cheng, Zhaopeng Tu, Fandong Meng, Jun- jie Zhai, and Yang Liu. 2018. Towards ro- bust neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1756-1766.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Autoaugment: Learning augmentation strategies from data",
"authors": [
{
"first": "D",
"middle": [],
"last": "Ekin",
"suffix": ""
},
{
"first": "Barret",
"middle": [],
"last": "Cubuk",
"suffix": ""
},
{
"first": "Dandelion",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Vijay",
"middle": [],
"last": "Mane",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Vasudevan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "113--123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. 2019. Autoaugment: Learning augmentation strategies from data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 113-123.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Randaugment: Practical automated data augmentation with a reduced search space",
"authors": [
{
"first": "D",
"middle": [],
"last": "Ekin",
"suffix": ""
},
{
"first": "Barret",
"middle": [],
"last": "Cubuk",
"suffix": ""
},
{
"first": "Jonathon",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops",
"volume": "",
"issue": "",
"pages": "702--703",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. 2020. Randaugment: Practical au- tomated data augmentation with a reduced search space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 702-703.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semi-supervised sequence learning",
"authors": [
{
"first": "M",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "28",
"issue": "",
"pages": "3079--3087",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 3079-3087. Curran Associates, Inc.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "GitHub repository",
"authors": [
{
"first": "David",
"middle": [],
"last": "Dale",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Dale. 2020. GitHub repository.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A kernel theory of modern data augmentation",
"authors": [
{
"first": "Tri",
"middle": [],
"last": "Dao",
"suffix": ""
},
{
"first": "Albert",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Ratner",
"suffix": ""
},
{
"first": "Virginia",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Chris",
"middle": [
"De"
],
"last": "Sa",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1528--1537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tri Dao, Albert Gu, Alexander Ratner, Virginia Smith, Chris De Sa, and Christopher R\u00e9. 2019. A kernel theory of modern data augmentation. In International Conference on Machine Learning, pages 1528-1537. PMLR.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In NAACL-HLT (1).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Understanding back-translation at scale",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "489--500",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 489-500.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Data augmentation for low-resource neural machine translation",
"authors": [
{
"first": "Marzieh",
"middle": [],
"last": "Fadaee",
"suffix": ""
},
{
"first": "Arianna",
"middle": [],
"last": "Bisazza",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "567--573",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marzieh Fadaee, Arianna Bisazza, and Christof Monz. 2017. Data augmentation for low-resource neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 567-573.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Backtranslation sampling by targeting difficult words in neural machine translation",
"authors": [
{
"first": "Marzieh",
"middle": [],
"last": "Fadaee",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "436--446",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marzieh Fadaee and Christof Monz. 2018. Back- translation sampling by targeting difficult words in neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 436-446.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Teruko Mitamura, and Eduard Hovy. 2021. A survey of data augmentation approaches for NLP. Findings of ACL",
"authors": [
{
"first": "Steven",
"middle": [
"Y"
],
"last": "Feng",
"suffix": ""
},
{
"first": "Varun",
"middle": [],
"last": "Gangal",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Sarath",
"middle": [],
"last": "Chandar",
"suffix": ""
},
{
"first": "Soroush",
"middle": [],
"last": "Vosoughi",
"suffix": ""
},
{
"first": "Teruko",
"middle": [],
"last": "Mitamura",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Y Feng, Varun Gangal, Jason Wei, Sarath Chan- dar, Soroush Vosoughi, Teruko Mitamura, and Ed- uard Hovy. 2021. A survey of data augmentation approaches for NLP. Findings of ACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A study of various text augmentation techniques for relation classification in free text",
"authors": [],
"year": 2019,
"venue": "Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods",
"volume": "1",
"issue": "",
"pages": "360--367",
"other_ids": {
"DOI": [
"10.5220/0007311003600367"
]
},
"num": null,
"urls": [],
"raw_text": "Praveen Kumar Badimala Giridhara., Chinmaya Mishra., Reddy Kumar Modam Venkataramana., Syed Saqib Bukhari., and Andreas Dengel. 2019. A study of various text augmentation techniques for relation classification in free text. In Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods -Volume 1: ICPRAM,, pages 360-367. INSTICC, SciTePress.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Deep speech: Scaling up end-to-end speech recognition",
"authors": [
{
"first": "Awni",
"middle": [],
"last": "Hannun",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Case",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Casper",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Catanzaro",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Diamos",
"suffix": ""
},
{
"first": "Erich",
"middle": [],
"last": "Elsen",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Prenger",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Satheesh",
"suffix": ""
},
{
"first": "Shubho",
"middle": [],
"last": "Sengupta",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Coates",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.5567"
]
},
"num": null,
"urls": [],
"raw_text": "Awni Hannun, Carl Case, Jared Casper, Bryan Catan- zaro, Greg Diamos, Erich Elsen, Ryan Prenger, San- jeev Satheesh, Shubho Sengupta, Adam Coates, et al. 2014. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Population based augmentation: Efficient learning of augmentation policy schedules",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Ho",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Xi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Stoica",
"suffix": ""
},
{
"first": "Pieter",
"middle": [],
"last": "Abbeel",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "2731--2741",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Ho, Eric Liang, Xi Chen, Ion Stoica, and Pieter Abbeel. 2019. Population based augmentation: Ef- ficient learning of augmentation policy schedules. In International Conference on Machine Learning, pages 2731-2741.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Deep unordered composition rivals syntactic methods for text classification",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Varun",
"middle": [],
"last": "Manjunatha",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1681--1691",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum\u00e9 III. 2015. Deep unordered com- position rivals syntactic methods for text classifica- tion. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), vol- ume 1, pages 1681-1691.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Population based training of neural networks",
"authors": [
{
"first": "Max",
"middle": [],
"last": "Jaderberg",
"suffix": ""
},
{
"first": "Valentin",
"middle": [],
"last": "Dalibard",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Osindero",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [
"M"
],
"last": "Czarnecki",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Donahue",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Razavi",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Green",
"suffix": ""
},
{
"first": "Iain",
"middle": [],
"last": "Dunning",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.09846"
]
},
"num": null,
"urls": [],
"raw_text": "Max Jaderberg, Valentin Dalibard, Simon Osindero, Wojciech M Czarnecki, Jeff Donahue, Ali Razavi, Oriol Vinyals, Tim Green, Iain Dunning, Karen Si- monyan, et al. 2017. Population based training of neural networks. arXiv preprint arXiv:1711.09846.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Vocal tract length perturbation (vtlp) improves speech recognition",
"authors": [
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. ICML Workshop on Deep Learning for Audio, Speech and Language",
"volume": "117",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Navdeep Jaitly and Geoffrey E Hinton. 2013. Vo- cal tract length perturbation (vtlp) improves speech recognition. In Proc. ICML Workshop on Deep Learning for Audio, Speech and Language, volume 117.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Data augmentation by data noising for open-vocabulary slots in spoken language understanding",
"authors": [
{
"first": "Hwa-Yeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yoon-Hyung",
"middle": [],
"last": "Roh",
"suffix": ""
},
{
"first": "Young-Gil",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop",
"volume": "",
"issue": "",
"pages": "97--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hwa-Yeon Kim, Yoon-Hyung Roh, and Young-Gil Kim. 2019. Data augmentation by data noising for open-vocabulary slots in spoken language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 97-102.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Audio augmentation for speech recognition",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Ko",
"suffix": ""
},
{
"first": "Vijayaditya",
"middle": [],
"last": "Peddinti",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2015,
"venue": "Sixteenth Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Ko, Vijayaditya Peddinti, Daniel Povey, and Sanjeev Khudanpur. 2015. Audio augmen- tation for speech recognition. In Sixteenth Annual Conference of the International Speech Communication Association.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Contextual augmentation: Data augmentation by words with paradigmatic relations",
"authors": [
{
"first": "Sosuke",
"middle": [],
"last": "Kobayashi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "452--457",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sosuke Kobayashi. 2018. Contextual augmentation: Data augmentation by words with paradigmatic re- lations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 452- 457.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Model-portability experiments for textual temporal analysis",
"authors": [
{
"first": "Oleksandr",
"middle": [],
"last": "Kolomiyets",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers",
"volume": "2",
"issue": "",
"pages": "271--276",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oleksandr Kolomiyets, Steven Bethard, and Marie- Francine Moens. 2011. Model-portability experi- ments for textual temporal analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 271- 276. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Imagenet classification with deep convolutional neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2012,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "1097--1105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hin- ton. 2012. Imagenet classification with deep con- volutional neural networks. In Advances in neural information processing systems, pages 1097-1105.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Ask me anything: Dynamic memory networks for natural language processing",
"authors": [
{
"first": "Ankit",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Ozan",
"middle": [],
"last": "Irsoy",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Ondruska",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Ishaan",
"middle": [],
"last": "Gulrajani",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Romain",
"middle": [],
"last": "Paulus",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2016,
"venue": "International conference on machine learning",
"volume": "",
"issue": "",
"pages": "1378--1387",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. 2016. Ask me anything: Dynamic memory networks for natu- ral language processing. In International conference on machine learning, pages 1378-1387.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Robust training under linguistic adversity",
"authors": [
{
"first": "Yitong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter",
"volume": "2",
"issue": "",
"pages": "21--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yitong Li, Trevor Cohn, and Timothy Baldwin. 2017. Robust training under linguistic adversity. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 21-27.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "How effective is task-agnostic data augmentation for pretrained transformers?",
"authors": [
{
"first": "Shayne",
"middle": [],
"last": "Longpre",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dubois",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
"volume": "",
"issue": "",
"pages": "4401--4411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shayne Longpre, Yu Wang, and Chris DuBois. 2020. How effective is task-agnostic data augmentation for pretrained transformers? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 4401-4411.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Atalaya at tass 2018: Sentiment analysis with tweet embeddings and data augmentation",
"authors": [
{
"first": "Franco",
"middle": [
"M"
],
"last": "Luque",
"suffix": ""
},
{
"first": "Juan",
"middle": [
"Manuel"
],
"last": "P\u00e9rez",
"suffix": ""
}
],
"year": 2018,
"venue": "TASS@ SEPLN",
"volume": "",
"issue": "",
"pages": "29--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franco M Luque and Juan Manuel P\u00e9rez. 2018. Ata- laya at tass 2018: Sentiment analysis with tweet embeddings and data augmentation. In TASS@ SEPLN, pages 29-35.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Paraphrasing revisited with neural machine translation",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Mallinson",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter",
"volume": "1",
"issue": "",
"pages": "881--893",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Mallinson, Rico Sennrich, and Mirella Lapata. 2017. Paraphrasing revisited with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 881-893.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Improving short text classification through global augmentation methods",
"authors": [
{
"first": "Vukosi",
"middle": [],
"last": "Marivate",
"suffix": ""
},
{
"first": "Tshephisho",
"middle": [],
"last": "Sefara",
"suffix": ""
}
],
"year": 2020,
"venue": "International Cross-Domain Conference for Machine Learning and Knowledge Extraction",
"volume": "",
"issue": "",
"pages": "385--399",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vukosi Marivate and Tshephisho Sefara. 2020. Im- proving short text classification through global aug- mentation methods. In International Cross-Domain Conference for Machine Learning and Knowledge Extraction, pages 385-399. Springer.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "WordNet: An electronic lexical database",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A Miller. 1998. WordNet: An electronic lexical database. MIT press.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Siamese recurrent architectures for learning sentence similarity",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Mueller",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Thyagarajan",
"suffix": ""
}
],
"year": 2016,
"venue": "Thirtieth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Mueller and Aditya Thyagarajan. 2016. Siamese recurrent architectures for learning sentence simi- larity. In Thirtieth AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "An analysis of ontology-based query expansion strategies",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Paola",
"middle": [],
"last": "Velardi",
"suffix": ""
}
],
"year": 2003,
"venue": "International Workshop & Tutorial on Adaptive Text Extraction and Mining held in conjunction with the 14th European Conference on Machine Learning and the 7th European Conference on Principles and Practice of",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli and Paola Velardi. 2003. An anal- ysis of ontology-based query expansion strategies. In International Workshop & Tutorial on Adaptive Text Extraction and Mining held in conjunction with the 14th European Conference on Machine Learning and the 7th European Conference on Principles and Practice of, page 42.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Learning to compose domain-specific transformations for data augmentation",
"authors": [
{
"first": "Alexander",
"middle": [
"J"
],
"last": "Ratner",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Ehrenberg",
"suffix": ""
},
{
"first": "Zeshan",
"middle": [],
"last": "Hussain",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Dunnmon",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3236--3246",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander J Ratner, Henry Ehrenberg, Zeshan Hus- sain, Jared Dunnmon, and Christopher R\u00e9. 2017. Learning to compose domain-specific transforma- tions for data augmentation. In Advances in neural information processing systems, pages 3236-3246.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A survey on image data augmentation for deep learning",
"authors": [
{
"first": "Connor",
"middle": [],
"last": "Shorten",
"suffix": ""
},
{
"first": "Taghi",
"middle": [
"M"
],
"last": "Khoshgoftaar",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Big Data",
"volume": "6",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Connor Shorten and Taghi M Khoshgoftaar. 2019. A survey on image data augmentation for deep learn- ing. Journal of Big Data, 6(1):60.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Text data augmentation for deep learning",
"authors": [
{
"first": "Connor",
"middle": [],
"last": "Shorten",
"suffix": ""
},
{
"first": "Taghi",
"middle": [
"M"
],
"last": "Khoshgoftaar",
"suffix": ""
},
{
"first": "Borko",
"middle": [],
"last": "Furht",
"suffix": ""
}
],
"year": 2021,
"venue": "Journal of Big Data",
"volume": "8",
"issue": "1",
"pages": "1--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Connor Shorten, Taghi M Khoshgoftaar, and Borko Furht. 2021. Text data augmentation for deep learn- ing. Journal of Big Data, 8(1):1-34.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep mod- els for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631-1642.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "DeepStance at SemEval-2016 task 6: Detecting stance in tweets using character and word-level CNNs",
"authors": [
{
"first": "Prashanth",
"middle": [],
"last": "Vijayaraghavan",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Sysoev",
"suffix": ""
},
{
"first": "Soroush",
"middle": [],
"last": "Vosoughi",
"suffix": ""
},
{
"first": "Deb",
"middle": [],
"last": "Roy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "413--419",
"other_ids": {
"DOI": [
"10.18653/v1/S16-1067"
]
},
"num": null,
"urls": [],
"raw_text": "Prashanth Vijayaraghavan, Ivan Sysoev, Soroush Vosoughi, and Deb Roy. 2016. DeepStance at SemEval-2016 task 6: Detecting stance in tweets us- ing character and word-level CNNs. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 413-419, San Diego, California. Association for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Tweet2Vec: Learning tweet embeddings using character-level CNN-LSTM encoder-decoder",
"authors": [
{
"first": "Soroush",
"middle": [],
"last": "Vosoughi",
"suffix": ""
},
{
"first": "Prashanth",
"middle": [],
"last": "Vijayaraghavan",
"suffix": ""
},
{
"first": "Deb",
"middle": [],
"last": "Roy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "1041--1044",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soroush Vosoughi, Prashanth Vijayaraghavan, and Deb Roy. 2016. Tweet2Vec: Learning tweet embeddings using character-level CNN-LSTM encoder-decoder. In Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval, pages 1041-1044. ACM.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "That's so annoying!!!: A lexical and frame-semantic embedding based data augmentation approach to automatic categorization of annoying behaviors using# petpeeve tweets",
"authors": [
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Yang Wang and Diyi Yang. 2015. That's so an- noying!!!: A lexical and frame-semantic embedding based data augmentation approach to automatic cat- egorization of annoying behaviors using# petpeeve tweets. In Proceedings of the 2015 Conference on",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Natural Language Processing",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "2557--2563",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Empirical Methods in Natural Language Processing, pages 2557-2563.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "SwitchOut: an efficient data augmentation algorithm for neural machine translation",
"authors": [
{
"first": "Xinyi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "856--861",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinyi Wang, Hieu Pham, Zihang Dai, and Graham Neu- big. 2018. SwitchOut: an efficient data augmenta- tion algorithm for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 856-861.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "EDA: Easy data augmentation techniques for boosting performance on text classification tasks",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Zou",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "6383--6389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Wei and Kai Zou. 2019. EDA: Easy data aug- mentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6383-6389.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1112--1122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel Bow- man. 2018. A broad-coverage challenge cor- pus for sentence understanding through infer- ence. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112- 1122.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Conditional BERT contextual augmentation",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Shangwen",
"middle": [],
"last": "Lv",
"suffix": ""
},
{
"first": "Liangjun",
"middle": [],
"last": "Zang",
"suffix": ""
},
{
"first": "Jizhong",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Songlin",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Computational Science",
"volume": "",
"issue": "",
"pages": "84--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han, and Songlin Hu. 2019. Conditional BERT contex- tual augmentation. In International Conference on Computational Science, pages 84-95. Springer.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Data noising as smoothing in neural network language models",
"authors": [
{
"first": "Ziang",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Sida",
"middle": [
"I"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "L\u00e9vy",
"suffix": ""
},
{
"first": "Aiming",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2017,
"venue": "5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziang Xie, Sida I. Wang, Jiwei Li, Daniel L\u00e9vy, Aim- ing Nie, Dan Jurafsky, and Andrew Y. Ng. 2017. Data noising as smoothing in neural network lan- guage models. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Improved relation classification by deep recurrent neural networks with data augmentation",
"authors": [
{
"first": "Yan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Ran",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": ""
},
{
"first": "Ge",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yunchuan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yangyang",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Jin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "1461--1470",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yan Xu, Ran Jia, Lili Mou, Ge Li, Yunchuan Chen, Yangyang Lu, and Zhi Jin. 2016. Improved re- lation classification by deep recurrent neural net- works with data augmentation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1461-1470.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Fast and accurate reading comprehension by combining self-attention and convolution",
"authors": [
{
"first": "Adams",
"middle": [
"Wei"
],
"last": "Yu",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Dohan",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adams Wei Yu, David Dohan, Quoc Le, Thang Luong, Rui Zhao, and Kai Chen. 2018. Fast and accurate reading comprehension by combining self-attention and convolution. In International Conference on Learning Representations.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Learning from LDA using deep neural networks",
"authors": [
{
"first": "Dongxu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Tianyi",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Natural Language Understanding and Intelligent Applications",
"volume": "",
"issue": "",
"pages": "657--664",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dongxu Zhang, Tianyi Luo, and Dong Wang. 2016. Learning from LDA using deep neural networks. In Natural Language Understanding and Intelligent Applications, pages 657-664. Springer.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Word embedding perturbation for sentence classification",
"authors": [
{
"first": "Dongxu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhichao",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.08166"
]
},
"num": null,
"urls": [],
"raw_text": "Dongxu Zhang and Zhichao Yang. 2018. Word embed- ding perturbation for sentence classification. arXiv preprint arXiv:1804.08166.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Integrating semantic knowledge to tackle zero-shot text classification",
"authors": [
{
"first": "Jingqing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Piyawat",
"middle": [],
"last": "Lertvittayakumjorn",
"suffix": ""
},
{
"first": "Yike",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1031--1040",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jingqing Zhang, Piyawat Lertvittayakumjorn, and Yike Guo. 2019. Integrating semantic knowl- edge to tackle zero-shot text classification. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1031-1040.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Character-level convolutional networks for text classification",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Junbo",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "649--657",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification. In Advances in neural information processing systems, page 649-657.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "e jk \u223c U(\u2212m i , m i ) . (2) Once again, e jk (0 \u2264 k < d) are the elements of noise vector e j uniformly distributed over the halfopen interval [\u2212m i , m i ] and d is the dimension of the embeddings.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"num": null,
"html": null,
"text": "\u00b10.69 88.11 \u00b10.41 88.76 \u00b10.46 90.49 \u00b10.49 88.64 \u00b10.47 88.60 \u00b10.68 89.56 \u00b10.40 90.91 \u00b10.43 +0.42 \u00b10.00 +0.49 \u00b10.00 +0.80 \u00b10.00 +0.42 \u00b10.00",
"content": "<table><tr><td>.</td><td/><td colspan=\"2\">ACCURACY [%]</td><td/></tr><tr><td/><td>N = 1500</td><td>2000</td><td>3000</td><td>FULL</td></tr><tr><td>NO AUGM SCHEDULE DIFFERENCE</td><td colspan=\"2\">88.22 (a) SST-2 test dataset.</td><td/><td/></tr><tr><td>.</td><td/><td colspan=\"2\">MIS-MATCHED ACCURACY [%]</td><td/></tr><tr><td/><td>N = 1500</td><td>2000</td><td>3000</td><td>FULL</td></tr><tr><td>NO AUGM</td><td>65.73 \u00b11.12</td><td>67.60 \u00b12.21</td><td>69.67 \u00b10.79</td><td>74.26 \u00b10.35</td></tr><tr><td>SCHEDULE</td><td>64.78 \u00b11.20</td><td>66.21 \u00b10.77</td><td>68.71 \u00b10.49</td><td>73.95 \u00b10.47</td></tr><tr><td>DIFFERENCE</td><td>-0.95 \u00b10.00</td><td>-1.39 \u00b10.00</td><td>-0.96 \u00b10.00</td><td>-0.31 \u00b10.00</td></tr><tr><td/><td colspan=\"2\">(b) MNLI test dataset.</td><td/><td/></tr></table>",
"type_str": "table"
},
"TABREF1": {
"num": null,
"html": null,
"text": "Performance on SST-2 and MNLI. The model is trained 10 times independently either without augmentation or with the augmentation schedule yielded by the search. Since a single search is conducted per value of N , the reported standard deviation measures the robustness of the training procedure for a single schedule. (For more details about the robustness of the augmentation search, see Section 6.1.)",
"content": "<table/>",
"type_str": "table"
},
"TABREF2": {
"num": null,
"html": null,
"text": "\u00b10.47 88.64 \u00b10.62 88.76 \u00b10.59",
"content": "<table><tr><td>TRAIN/VAL</td><td>250/1250</td><td>750/750</td><td>1250/250</td></tr><tr><td>ACCURACY</td><td>88.64</td><td/><td/></tr><tr><td>can be attributed to the ratio</td><td/><td/><td/></tr></table>",
"type_str": "table"
},
"TABREF4": {
"num": null,
"html": null,
"text": "Overview of the data augmentation transforms from our search space that operate directly on the input. This shows the outcome when the transformations are applied on samples from the SST-2 dataset with three different magnitude levels m = 0, 0.5, 1. Where relevant, tokens that have been replaced are highlighted in bold. In addition, newly inserted tokens are italicized whereas tokens that have changed places are underlined.",
"content": "<table/>",
"type_str": "table"
},
"TABREF5": {
"num": null,
"html": null,
"text": "The intermediate translation languages for each magnitude level m i . They are separated according to inverse BLEU3 scores.",
"content": "<table/>",
"type_str": "table"
},
"TABREF7": {
"num": null,
"html": null,
"text": "Implementation details of both the child models and the final model (both with and without augmentation). The model used is the uncased BERT base model with a classification layer.",
"content": "<table><tr><td/><td/><td colspan=\"4\">6WDFNHGSURGXFWRISUREDELOLW\\DQGPDJQLWXGHWKURXJKHSRFKV 6671</td></tr><tr><td>3UREDELOLW\\ \u00d70DJQLWXGH</td><td>6HDUFK</td><td/><td>3UREDELOLW\\ \u00d70DJQLWXGH</td><td>6HDUFK</td><td/></tr><tr><td>3UREDELOLW\\ \u00d70DJQLWXGH</td><td>6HDUFK</td><td>(SRFKV</td><td>3UREDELOLW\\ \u00d70DJQLWXGH</td><td>6HDUFK</td><td>(SRFKV</td></tr><tr><td>3UREDELOLW\\ \u00d70DJQLWXGH</td><td>6HDUFK</td><td>(SRFKV</td><td>3UREDELOLW\\ \u00d70DJQLWXGH</td><td>6HDUFK</td><td>(SRFKV</td></tr><tr><td>3UREDELOLW\\ \u00d70DJQLWXGH</td><td>6HDUFK</td><td>(SRFKV</td><td>3UREDELOLW\\ \u00d70DJQLWXGH</td><td>6HDUFK</td><td>(SRFKV</td></tr><tr><td>3UREDELOLW\\ \u00d70DJQLWXGH</td><td>6HDUFK</td><td>(SRFKV</td><td>3UREDELOLW\\ \u00d70DJQLWXGH</td><td>6HDUFK</td><td>(SRFKV</td></tr><tr><td/><td/><td>(SRFKV</td><td/><td/><td>(SRFKV</td></tr><tr><td/><td colspan=\"2\">5DQGRP6ZDS</td><td colspan=\"2\">6\\QRQ\\P5HSODFHPHQW</td><td>%DFNWUDQVODWLRQ</td></tr><tr><td/><td colspan=\"2\">5DQGRP'HOHWLRQ</td><td colspan=\"2\">+\\SHUQ\\P5HSODFHPHQW</td><td>(PE1RUPDO1RLVH</td></tr><tr><td/><td colspan=\"2\">5DQGRP,QVHUWLRQ6\\Q</td><td colspan=\"2\">1HDUHVW1HLJKERXU</td><td>(PE8QLIRUP1RLVH</td></tr><tr><td/><td colspan=\"2\">5DQGRP,QVHUWLRQ6\\Q</td><td colspan=\"2\">&amp;RQWH[WXDO$XJPHQWDWLRQ</td><td/></tr><tr><td/><td/><td/><td>101</td><td/><td/></tr></table>",
"type_str": "table"
}
}
}
}