{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:08:33.375427Z" }, "title": "Multi-Layer Random Perturbation Training for Improving Model Generalization", "authors": [ { "first": "Kanashiro", "middle": [], "last": "Lis", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ochanomizu University", "location": {} }, "email": "kanashiro.pereira@ocha.ac.jp" }, { "first": "", "middle": [], "last": "Pereira", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ochanomizu University", "location": {} }, "email": "" }, { "first": "Yuki", "middle": [], "last": "Taya", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ochanomizu University", "location": {} }, "email": "" }, { "first": "Ichiro", "middle": [], "last": "Kobayashi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ochanomizu University", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We propose a simple yet effective Multi-Layer RAndom Perturbation Training algorithm (RAPT) to enhance model robustness and generalization. The key idea is to apply randomly sampled noise to each input to generate label-preserving artificial input points. To encourage the model to generate more diverse examples, the noise is added to a combination of the model layers. Then, our model regularizes the posterior difference between clean and noisy inputs. We apply RAPT towards robust and efficient BERT training, and conduct comprehensive fine-tuning experiments on GLUE tasks. Our results show that RAPT outperforms the standard fine-tuning approach, and adversarial training method, yet with 22% less training time.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We propose a simple yet effective Multi-Layer RAndom Perturbation Training algorithm (RAPT) to enhance model robustness and generalization. The key idea is to apply randomly sampled noise to each input to generate label-preserving artificial input points. To encourage the model to generate more diverse examples, the noise is added to a combination of the model layers. Then, our model regularizes the posterior difference between clean and noisy inputs. We apply RAPT towards robust and efficient BERT training, and conduct comprehensive fine-tuning experiments on GLUE tasks. Our results show that RAPT outperforms the standard fine-tuning approach, and adversarial training method, yet with 22% less training time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Although deep learning models have been very successful in various kinds of NLP problems, they are known to be sensitive towards input data distribution change which are pervasive across language tasks. Motivated by this, a recent line of work investigate the adversarial training technique to enhance the model robustness. Adversarial training has proven effective in improving model generalization and robustness in computer vision (Madry et al., 2017; Goodfellow et al., 2014) and natural language processing (NLP) (Zhu et al., 2019; Liu et al., 2020a; Pereira et al., , 2021 . It works by augmenting the input with a small perturbation to steer the current model prediction away from the correct label, thus forcing subsequent training to make the model more robust and generalizable. 
In NLP, the cutting-edge research in adversarial training tends to perform an inner search for the most adversarial direction using gradient steps (Zhu et al., 2019; Liu et al., 2020a; et , 2021 . This causes a significant overhead in training time. Moreover, such methods tend to add the adversarial perturbation only to the embedding layer, which might not be optimal.", "cite_spans": [ { "start": 434, "end": 454, "text": "(Madry et al., 2017;", "ref_id": "BIBREF18" }, { "start": 455, "end": 479, "text": "Goodfellow et al., 2014)", "ref_id": "BIBREF9" }, { "start": 518, "end": 536, "text": "(Zhu et al., 2019;", "ref_id": "BIBREF27" }, { "start": 537, "end": 555, "text": "Liu et al., 2020a;", "ref_id": "BIBREF15" }, { "start": 556, "end": 578, "text": "Pereira et al., , 2021", "ref_id": "BIBREF21" }, { "start": 936, "end": 954, "text": "(Zhu et al., 2019;", "ref_id": "BIBREF27" }, { "start": 955, "end": 973, "text": "Liu et al., 2020a;", "ref_id": "BIBREF15" }, { "start": 977, "end": 983, "text": ", 2021", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "By contrast, in this paper, we investigate a simpler direction by using only randomly sampled noise to generate label-preserving artificial input points. We thus propose a simple yet effective RAndom Perturbation Training algorithm (RAPT) for enhancing model robustness and generalization. For each instance, instead of using gradient steps to generate adversarial examples, RAPT adds randomly sampled noise to the hidden representaions of a randomly chosen layer, among multiple intermediate transformer layers (i.e. BERT layers). We hypothesize this might encourage the model to generate more diverse examples, and improve model generalization capability. Our model then regularizes the model posterior difference between clean and noisy inputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "On the overall GLUE benchmark, RAPT outperforms the standard fine-tuning approach, and matches or improves the performance of strong adversarial training methods such as SMART ), yet with a significantly reduced training time. Figure 1 shows the accuracy gain and training time drop of RAPT compared to SMART on the MNLI (matched) development dataset.", "cite_spans": [], "ref_spans": [ { "start": 227, "end": 235, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we focus on fine-tuning BERT models (Devlin et al., 2019) , as this approach has proven very effective for a wide range of NLP tasks.", "cite_spans": [ { "start": 51, "end": 72, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "The standard training algorithm seeks to learn a function f (x; \u03b8) : x \u2192 C as parametrized by \u03b8, where C is the class label set. Given a training dataset D of input-output pairs (x, y) and the loss function l(., .) (e.g. 
cross entropy), the standard training objective would minimize the empirical risk:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "min \u03b8 E (x,y)\u223cD [l(f (x; \u03b8), y)].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "By contrast, in adversarial training, as pioneered in computer vision (Goodfellow et al., 2014; Hsieh et al., 2019; Madry et al., 2017; Jin et al., 2019) , the input would be augmented with a small perturbation that maximizes the adversarial loss:", "cite_spans": [ { "start": 70, "end": 95, "text": "(Goodfellow et al., 2014;", "ref_id": "BIBREF9" }, { "start": 96, "end": 115, "text": "Hsieh et al., 2019;", "ref_id": "BIBREF10" }, { "start": 116, "end": 135, "text": "Madry et al., 2017;", "ref_id": "BIBREF18" }, { "start": 136, "end": 153, "text": "Jin et al., 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "min \u03b8 E (x,y)\u223cD [max \u03b4 l(f (x + \u03b4; \u03b8), y)],", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "where the inner maximization can be solved by projected gradient descent (Madry et al., 2017) .", "cite_spans": [ { "start": 73, "end": 93, "text": "(Madry et al., 2017)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "Recently, adversarial training has been successfully applied to NLP as well (Zhu et al., 2019) . In particular, SMART regularizes the standard training objective using virtual adversarial training (Miyato et al., 2018) , by performing an inner loop to search for the most adversarial direction:", "cite_spans": [ { "start": 76, "end": 94, "text": "(Zhu et al., 2019;", "ref_id": "BIBREF27" }, { "start": 197, "end": 218, "text": "(Miyato et al., 2018)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "min \u03b8 E (x,y)\u223cD [l(f (x; \u03b8), y)+ \u03b1 max \u03b4 l(f (x + \u03b4; \u03b8), f (x; \u03b8))]", "eq_num": "(1)" } ], "section": "RAPT", "sec_num": "2" }, { "text": "Effectively, the adversarial term encourages smoothness in the input neighborhood, and \u03b1 is a hyperparameter that controls the trade-off between standard errors and adversarial errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "Current adversarial methods for NLP are slower than standard training, due to the inner maximization. 
SMART, for instance, requires an additional K projected gradient steps to find the perturbation that maximizes the adversarial loss (violation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "Algorithm 1 RAPT Input: N :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "the number of training epochs, D, {(x1, y1), ..., (xn, yn)}: the dataset, X : the minibatch of the dataset, f (x; \u03b8): the machine learning model parametrized by \u03b8, \u03c3 2 : the variance of the random noise \u03b4, \u03b7: the number to controll the size of the noise, L: the number of transformer based model's layers, f layer : the function that computes the hidden representations of a given layer, h: the hidden representations of a layer of the model, \u03b4r: the noise added to the hidden states of layer r , \u03c4 : the global learning rate, \u03b1: the hyperparameter for balancing the standard loss and the regularization term, max_layer: the number of the maximum layer where the noise can be added during training. 1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "for epoch = 1, 2, .., N do 2:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "for X \u2208 D do 3:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "Generate a random integer r \u2208 {1, ...,max_layer} 4:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "for (x, y) \u2208 X do 5: \u03b4 \u223c N (0, \u03c3 2 I) 6: \u03b4 \u2190 \u03b7 \u03b4 \u03b4 \u221e 7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": ": // x : forward pass to the last layer of the model 8:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "for layer = 1, 2, ..., L do 9: h \u2190 f layer (h) 10:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "if layer is r then 11:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "h \u2190 h + \u03b4rI 12:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "end if 13: end for 14:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "g \u03b8 \u2190 \u2207 \u03b8 l(f (x; \u03b8), y) 15: +\u03b1\u2207 \u03b8 l(f (x + \u03b4r; \u03b8), f (x; \u03b8)) 16: \u03b8 \u2190 \u03b8 \u2212 \u03c4 g \u03b8 17:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "end for 18: end for 19: end for Output: \u03b8 of local smoothness) (Liu et al., 2020a) . In practice, K = 1 suffices SMART, and it is roughly 2 times slower compared to standard training. 
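To make this overhead concrete, the sketch below shows the kind of K-step projected-gradient inner loop that a SMART-style objective adds on top of standard fine-tuning. It is a minimal PyTorch-style illustration, not the authors' code or the reference SMART implementation, and it assumes a HuggingFace-style classifier that accepts `inputs_embeds` and returns `.logits`; each inner step costs an extra forward and backward pass, which is where the roughly 2x slowdown comes from.

```python
# Illustrative sketch of the inner maximization in Eq. (1); names and the
# projection choice are assumptions, not the reference SMART implementation.
import torch
import torch.nn.functional as F

def inner_adversarial_perturbation(model, embeds, clean_logits,
                                   K=1, step_size=1e-3, eps=1e-5, init_var=1e-5):
    """Run K projected-gradient ascent steps to find the perturbation delta
    that most changes the model posterior around the clean input."""
    delta = torch.randn_like(embeds) * init_var ** 0.5    # small random init
    for _ in range(K):                                     # K extra fwd/bwd passes
        delta.requires_grad_()
        adv_logits = model(inputs_embeds=embeds + delta).logits
        adv_loss = F.kl_div(F.log_softmax(adv_logits, dim=-1),
                            F.softmax(clean_logits.detach(), dim=-1),
                            reduction="batchmean")
        grad, = torch.autograd.grad(adv_loss, delta)
        # ascend the adversarial loss, then project back to a small neighborhood
        delta = (delta + step_size * grad).detach()
        delta = eps * delta / (delta.abs().amax(dim=-1, keepdim=True) + 1e-12)
    return delta
```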
By contrast, RAPT completely removes the adversarial steps that use gradient steps from SMART and instead optimizes for stabilizing the model local smoothness using only randomly sampled noise for regularization:", "cite_spans": [ { "start": 63, "end": 82, "text": "(Liu et al., 2020a)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "min \u03b8 E (x,y)\u223cD [l(f (x; \u03b8), y)+ \u03b1l(f (x + \u03b4 r ; \u03b8), f (x; \u03b8))]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "( 2)RAPT does not require extra backward computations and empirically works as well as or better than SMART. We consider the posterior regularization using the KL-divergence. For all tasks in this work, an input text sequence is divided into subword units w t , t = 1, . . . , T . The tokenized input sequence is then transformed into embeddings, x 1 , . . . , x T \u2208 R n , through a token encoder, which combines a token embedding, a (token) position embedding and a segment embedding (i.e. which text span the token belongs to) by elementwise summation. The embedding layer is used as the input to multiple transformer layers (Vaswani et al., 2017) to generate the contextual representations, h layer 1 , . . . , h layer T \u2208 R d , which are the hidden states of an intermediate layer of the BERT model.", "cite_spans": [ { "start": 627, "end": 649, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "In RAPT, we sample noise vectors \u03b4 1 , . . . , \u03b4 T from N (0, \u03c3 2 I), with mean 0 and variation of \u03c3 2 . We first set a maximum layer (among all BERT intermediate layers) where the noise vector can be added. In each epoch, for each mini-batch selected, a layer among the first layer and the maximum layer previously set is randomly chosen. The noise input is then constructed by adding the noise vector \u03b4 r (Equation 2) to the hidden state vector of the ramdomly chosen layer (h layer ). Specifically, the model first performs a forward pass up to the chosen layer, then the noise vector is added to its hidden states, i.e. h layer", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "1 +\u03b7\u03b4 r1 , . . . , h layer T +\u03b7\u03b4 rT .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "The model is then updated according to the taskspecific objective for the task. To preserve the semantics, we constrain the noise to be small, and assume the model's prediction should not change after adding the perturbation. The algorithm of RAPT is shown in Algorithm 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RAPT", "sec_num": "2" }, { "text": "We evaluate our model on the GLUE 1 benchmark, a collection of nine natural language understanding (NLU) tasks. It includes question answering (Rajpurkar et al., 2016) , linguistic acceptability (Warstadt et al., 2018) , sentiment analysis (Socher et al., 2013 ), text similarity (Cer et al., 2017) , paraphrase detection (Dolan and Brockett, 2005) , and natural language inference (NLI) Bar-Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009; Levesque et al., 2012; Williams et al., 2018) . The diversity of the tasks makes GLUE very suitable for evaluating the generalization and robustness of NLU models. 
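Before turning to the experimental setup, the following minimal PyTorch-style sketch makes one RAPT update of Algorithm 1 (Section 2) concrete. The forward-hook mechanism for injecting noise, the per-token infinity-norm rescaling, the `model.bert.encoder.layer` access, and the use of KL for the posterior difference reflect our reading of the algorithm and a HuggingFace-style model, not the authors' released code.

```python
# Hypothetical sketch of one RAPT training step (Algorithm 1); layer access via
# a forward hook and the loss bookkeeping are assumptions, not the released code.
import random
import torch
import torch.nn.functional as F

def rapt_step(model, batch, optimizer, alpha=1.0, eta=2.0,
              sigma2=1e-5, max_layer=3):
    input_ids, attention_mask, labels = batch
    # 1) clean forward pass and standard task loss
    clean_logits = model(input_ids, attention_mask=attention_mask).logits
    task_loss = F.cross_entropy(clean_logits, labels)

    # 2) pick a transformer layer r uniformly from {1, ..., max_layer}
    r = random.randint(1, max_layer)
    layer = model.bert.encoder.layer[r - 1]        # assumes a BERT-style encoder

    def add_noise(module, inputs, output):
        hidden = output[0]                         # hidden states of layer r
        delta = torch.randn_like(hidden) * (sigma2 ** 0.5)
        delta = eta * delta / (delta.abs().amax(dim=-1, keepdim=True) + 1e-12)
        return (hidden + delta,) + output[1:]

    handle = layer.register_forward_hook(add_noise)
    noisy_logits = model(input_ids, attention_mask=attention_mask).logits
    handle.remove()

    # 3) regularize the posterior difference between clean and noisy inputs
    reg = F.kl_div(F.log_softmax(noisy_logits, dim=-1),
                   F.softmax(clean_logits.detach(), dim=-1),
                   reduction="batchmean")

    loss = task_loss + alpha * reg
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the noise is sampled rather than computed by gradient ascent, the noisy forward pass is the only extra cost over standard fine-tuning.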
The GLUE tasks used in our experiments are summarized in Table 1 .", "cite_spans": [ { "start": 143, "end": 167, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF22" }, { "start": 195, "end": 218, "text": "(Warstadt et al., 2018)", "ref_id": "BIBREF25" }, { "start": 240, "end": 260, "text": "(Socher et al., 2013", "ref_id": "BIBREF23" }, { "start": 280, "end": 298, "text": "(Cer et al., 2017)", "ref_id": "BIBREF2" }, { "start": 322, "end": 348, "text": "(Dolan and Brockett, 2005)", "ref_id": "BIBREF7" }, { "start": 388, "end": 410, "text": "Bar-Haim et al., 2006;", "ref_id": "BIBREF0" }, { "start": 411, "end": 436, "text": "Giampiccolo et al., 2007;", "ref_id": "BIBREF8" }, { "start": 437, "end": 461, "text": "Bentivogli et al., 2009;", "ref_id": "BIBREF1" }, { "start": 462, "end": 484, "text": "Levesque et al., 2012;", "ref_id": "BIBREF14" }, { "start": 485, "end": 507, "text": "Williams et al., 2018)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 683, "end": 690, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "Our model implementation is based on the MT-DNN 2 framework (Liu et al., , 2020b . We use BERT BASE (Devlin et al., 2019) as the text encoder. We used ADAM (Kingma and Ba, 2015) as our optimizer with a learning rate in the range \u2208 {1 \u00d7 10 \u22125 , 2 \u00d7 10 \u22125 , 8 \u00d7 10 \u22126 } and a batch size \u2208 {16, 32}. The maximum number of epochs was set to 6. A linear learning rate decay schedule with warm-up over 0.1 was used, unless stated otherwise. To avoid gradient exploding, we clipped the gradient norm within 1. All the texts were tokenized using wordpieces and were chopped to spans no longer than 512 tokens. For SMART , we follow and set the perturbation size to 1 \u00d7 10 \u22125 . We choose the step size from {1 \u00d7 10 \u22123 , 1 \u00d7 10 \u22122 , 1 \u00d7 10 \u22121 , 1, 1.5, 2, 2.5, 3}. We set the variance for initializing the perturbation to 1 \u00d7 10 \u22125 . The \u03b1 parameter (Equation 1 and Equation 2) were both set to 1. During RAPT, we select \u03b7 from {0.01, 1.5, 2, 2.3, 2.5}. We found that adding the noise to the layers 1 to 3 worked best in our experiments, therefore, the max_layer parameter in Algorithm 1 was set to 3. For more details, please refer to Section 4.", "cite_spans": [ { "start": 60, "end": 80, "text": "(Liu et al., , 2020b", "ref_id": null }, { "start": 100, "end": 121, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "3.2" }, { "text": "We apply RAPT to BERT BASE and evaluate its performance on GLUE. Our results are shown in Table 2 . We compare RAPT with the standard fine-tuning approach (Standard) and with the adversarial training method SMART. For our model RAPT, we compare a model that adds the noise to the embedding layer only, RAPT (Embedding), with the model that adds the noise to the other layers. We report the model that uses \u03b7 = 2, RAPT (\u03b7 = 2), and the model that selects the best \u03b7 value, RAPT (BEST \u03b7).", "cite_spans": [], "ref_spans": [ { "start": 90, "end": 98, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Main Results", "sec_num": "3.3" }, { "text": "Overall, we observed that SMART and RAPT were able to outperform standard fine-tuning, without using any additional knowledge source, and without using any additional dataset other than the target task datasets. 
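For reference, the search space described in Section 3.2 can be collected into a single configuration sketch; the values are the ones reported above, while the dictionary layout and key names are only illustrative and do not come from the released code.

```python
# Hyperparameter search space from Section 3.2 (values as reported in the text;
# structure and key names are illustrative).
FINETUNING_SEARCH_SPACE = {
    "encoder": "BERT-base",
    "optimizer": "Adam",
    "learning_rate": [1e-5, 2e-5, 8e-6],
    "batch_size": [16, 32],
    "max_epochs": 6,
    "lr_schedule": "linear decay, warm-up proportion 0.1",
    "gradient_clip_norm": 1.0,
    "max_sequence_length": 512,
    # SMART baseline
    "smart_perturbation_size": 1e-5,
    "smart_step_size": [1e-3, 1e-2, 1e-1, 1, 1.5, 2, 2.5, 3],
    "perturbation_init_variance": 1e-5,   # variance for initializing the perturbation
    # RAPT
    "alpha": 1.0,                         # weight of the regularization term
    "eta": [0.01, 1.5, 2, 2.3, 2.5],      # noise scaling factor
    "max_layer": 3,                       # noise layer sampled from {1, 2, 3}
}
```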
These results suggest that adding noisy input points during training lead to a more robust model and help generalize better on unseen data. RAPT consistently outperforms standard training (with an average score of 81.9% vs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Main Results", "sec_num": "3.3" }, { "text": "Remarkably, RAPT outperforms SMART on most GLUE tasks, and obtains the highest average score among all tasks on both dev and test sets (85.1% and 81.9%, respectively), yet with a smaller training time (86.7 minutes on average, as shown in Table 3 ). This indicates that using only randomly sampled noise leads to better results. We also observe pronounced gains of RAPT on the smaller datasets such as RTE (with a test set accuracy of Table 2 : Comparison of standard fine-tuning (Standard), adversarial training (SMART) and our methods (RAPT) on GLUE. We use the BERT BASE model as the text encoder for all models. For a fair comparison, all these results are produced by ourselves.Theses scores are the average of each model across 5 random seeds. The GLUE test results are scored using the GLUE evaluation server.", "cite_spans": [], "ref_spans": [ { "start": 239, "end": 246, "text": "Table 3", "ref_id": null }, { "start": 435, "end": 442, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "81.1% on the test set).", "sec_num": null }, { "text": "68.4) and STS-B (with a test set Pearson/Spearman correlation scores of 87.5/86.4), which illustrates the benefits of RAPT on improving model generalizability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "81.1% on the test set).", "sec_num": null }, { "text": "Adding the perturbation to multiple intermediate layers leads to better results on the dev set than adding the random perturbation to the embedding layer only (average score of 85.1% vs. 84.9%, respectively). However, on the test set, adding the random perturbation to the embedding layer only leads to slightly better results than adding the random perturbation to multiple intermediate layers (81.9% vs. 81.8%, respectively) . Still, both settings outperform SMART and standard fine-tuning.", "cite_spans": [ { "start": 395, "end": 426, "text": "(81.9% vs. 81.8%, respectively)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "81.1% on the test set).", "sec_num": null }, { "text": "Regarding the training time, on the overall GLUE benchmark, RAPT takes on average 22% less time to train compared to SMART (86.7 vs 110.9 minutes, respectively), as shown in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 174, "end": 181, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "81.1% on the test set).", "sec_num": null }, { "text": "Here, we take a closer look at the embeddings and hidden representations of the intermediate layers of BERT using standard fine-tuning (Standard), SMART, and RAPT. After training, we extract the embeddings (for Standard and SMART) and the intermediate layer representations (where noise has been added) of the second intermediate layer (for RAPT) of a sentence. Then, we compute the top most similar words for each word in the sentence. We use the cosine similarity for computing the similarity between the vectors. As shown in Table 3 : Training time comparison between standard fine-tuning (Standard), SMART, and RAPT. RAPT/SMART denotes the ratio between the training time of RAPT and SMART. 
We used an A100-PCIE-40GB GPU to measure the training time.", "cite_spans": [], "ref_spans": [ { "start": 528, "end": 535, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Analysis", "sec_num": "4" }, { "text": "Input: [CLS] by law , mexico can only export half the oil it produces to the united states .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "4" }, { "text": "[SEP] mexico produces more oil than any other country . Table 4 : Top-10 closest words to the vector of the word country using standard fine-tuning (Standard), SMART, and RAPT on the RTE development dataset. The words in red are the words not shared with the Standard fine-tuning method. RAPT has five different words, while SMART has only two.", "cite_spans": [], "ref_spans": [ { "start": 56, "end": 63, "text": "Table 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Analysis", "sec_num": "4" }, { "text": "Although the hidden representations exhibit more diversity, they remain meaningful.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "4" }, { "text": "To further illustrate the diversity RAPT introduces to the model's hidden representations, Table 5 and Table 6 compare the embeddings produced by SMART before and after adding the perturbation and the hidden representations produced by RAPT before and after adding the noise. For SMART, we can observe that the top-10 similar words to the word country do not change after adding the perturbation, and the cosine similarity scores remain about the same. In our experiments, increasing the perturbation size leads to a drop in accuracy. For RAPT, we extract the hidden representations from the second layer of BERT. As we can see, the cosine similarity scores change and more diversity is introduced. In our development experiments, the accuracy instead increases, indicating that the hidden representations from the intermediate layers might be less sensitive to noise than the embedding layer.", "cite_spans": [], "ref_spans": [ { "start": 91, "end": 99, "text": "Table 5", "ref_id": null }, { "start": 104, "end": 111, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Analysis", "sec_num": "4" }, { "text": "Regarding which layer combination is best for adding the noise, we found that adding the noise to layers 1 to 3 worked best in our experiments, as shown in Figure 2 and Figure 3 , for the MNLI and MRPC datasets. On the x-axis, max_layer = embed denotes that the noise is added to the embedding layer only. All the other values denote that, for each mini-batch, a layer from layer 1 up to this value is randomly chosen, and the model is then updated according to the task-specific objective.", "cite_spans": [], "ref_spans": [ { "start": 168, "end": 176, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 181, "end": 189, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Analysis", "sec_num": "4" }, { "text": "Input: [CLS] by law , mexico can only export half the oil it produces to the united states . [SEP] mexico produces more oil than any other country . 
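The nearest-word probe used throughout this analysis is straightforward to reproduce. The sketch below assumes one vector per vocabulary word has already been extracted from the layer under inspection (the embedding layer for Standard and SMART, the noisy second layer for RAPT) and simply ranks words by cosine similarity, as in Tables 4 to 6; the function and variable names are illustrative, not the authors' scripts.

```python
# Illustrative sketch of the top-k nearest-word probe behind Tables 4-6.
# `word_vectors` maps each word to a tensor extracted from the layer under
# inspection; how those vectors are collected is outside this sketch.
import torch
import torch.nn.functional as F

def top_k_similar(target_word, word_vectors, k=10):
    words = [w for w in word_vectors if w != target_word]
    target = word_vectors[target_word].unsqueeze(0)           # (1, d)
    matrix = torch.stack([word_vectors[w] for w in words])    # (V, d)
    sims = F.cosine_similarity(target, matrix, dim=-1)        # (V,)
    scores, idx = sims.topk(k)
    return [(words[i], float(s)) for i, s in zip(idx.tolist(), scores)]

# e.g. top_k_similar("country", word_vectors) returns 10 (word, score) pairs
```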
[ Table 5 : Top-10 closest words to the vector of the word country using SMART and RAPT on the RTE development dataset. SMART (the leftmost column) denotes the embedding of the target vector (country) before adding the perturbation using SMART. SMART (+noise) denotes the embedding of the target vector (country) after adding the perturbation using SMART. In both cases, we compute the similarity between the embedding of the target word and the embeddings of the other words in the training set. RAPT (the second column from the right) denotes the hidden state vector of the second layer of BERT of the target vector (country) before adding the noise using RAPT. RAPT (+noise) denotes the hidden state vector of the second layer of BERT of the target vector (country) after adding the noise using RAPT. In both cases, we compute the similarity between the hidden state of the target word and the hidden states of the other words in the training set. Table 6 : Top-10 closest words to the vector of the word can using SMART and RAPT on the RTE development dataset. SMART (the leftmost column) denotes the embedding of the target vector (can) before adding the perturbation using SMART. SMART (+noise) denotes the embedding of the target vector (can) after adding the perturbation using SMART. In both cases, we compute the similarity between the embedding of the target word and the embeddings of the other words in the training set. RAPT (the second column from the right) denotes the hidden state vector of the second layer of BERT of the target vector (can) before adding the noise using RAPT. RAPT (+noise) denotes the hidden state vector of the second layer of BERT of the target vector (can) after adding the noise using RAPT. In both cases, we compute the similarity between the hidden state of the target word and the hidden states of the other words in the training set.", "cite_spans": [ { "start": 93, "end": 98, "text": "[SEP]", "ref_id": null }, { "start": 149, "end": 150, "text": "[", "ref_id": null } ], "ref_spans": [ { "start": 151, "end": 158, "text": "Table 5", "ref_id": null }, { "start": 1100, "end": 1107, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Analysis", "sec_num": "4" }, { "text": "We proposed RAPT, a simple and efficient random perturbation training algorithm for fine-tuning large scale pre-trained language models. Our experiments demonstrated that it achieves competitive results on GLUE tasks, without relying on any additional resource other than the target task dataset. Moreover, our model can significanlty reduce the training time compared to adversarial training. RAPT is model-agnostic, and can also be generalized to solve other downstream tasks as well, and we will explore these directions as future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "https://gluebenchmark.com/ 2 https://github.com/namisan/mt-dnn", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the reviewers for their helpful feedback. 
This work has been supported by the project KAK-ENHI ID: 21K17802.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The second PASCAL recognising textual entailment challenge", "authors": [ { "first": "Roy", "middle": [], "last": "Bar-Haim", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" }, { "first": "Lisa", "middle": [], "last": "Ferro", "suffix": "" }, { "first": "Danilo", "middle": [], "last": "Giampiccolo", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roy Bar-Haim, Ido Dagan, Bill Dolan, Lisa Ferro, and Danilo Giampiccolo. 2006. The second PASCAL recognising textual entailment challenge. In Pro- ceedings of the Second PASCAL Challenges Work- shop on Recognising Textual Entailment.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The fifth pascal recognizing textual entailment challenge", "authors": [ { "first": "Luisa", "middle": [], "last": "Bentivogli", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Hoa", "middle": [ "Trang" ], "last": "Dang", "suffix": "" }, { "first": "Danilo", "middle": [], "last": "Giampiccolo", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Magnini", "suffix": "" } ], "year": 2009, "venue": "Proc Text Analysis Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luisa Bentivogli, Ido Dagan, Hoa Trang Dang, Danilo Giampiccolo, and Bernardo Magnini. 2009. The fifth pascal recognizing textual entailment challenge. In In Proc Text Analysis Conference (TAC'09.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation", "authors": [ { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Inigo", "middle": [], "last": "Lopez-Gazpio", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1708.00055" ] }, "num": null, "urls": [], "raw_text": "Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez- Gazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Posterior differential regularization with f-divergence for improving model robustness", "authors": [ { "first": "Hao", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Lis", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Yaoliang", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2010.12638" ] }, "num": null, "urls": [], "raw_text": "Hao Cheng, Xiaodong Liu, Lis Pereira, Yaoliang Yu, and Jianfeng Gao. 2020. 
Posterior differential reg- ularization with f-divergence for improving model robustness. arXiv preprint arXiv:2010.12638.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Robust neural machine translation with doubly adversarial inputs", "authors": [ { "first": "Yong", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Macherey", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yong Cheng, Lu Jiang, and Wolfgang Macherey. 2019. Robust neural machine translation with doubly ad- versarial inputs.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The pascal recognising textual entailment challenge", "authors": [ { "first": "Oren", "middle": [], "last": "Ido Dagan", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Glickman", "suffix": "" }, { "first": "", "middle": [], "last": "Magnini", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the First International Conference on Machine Learning Challenges: Evaluating Predictive Uncertainty Visual Object Classification, and Recognizing Textual Entailment, MLCW'05", "volume": "", "issue": "", "pages": "177--190", "other_ids": { "DOI": [ "10.1007/11736790_9" ] }, "num": null, "urls": [], "raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The pascal recognising textual entailment challenge. In Proceedings of the First Inter- national Conference on Machine Learning Chal- lenges: Evaluating Predictive Uncertainty Visual Object Classification, and Recognizing Textual En- tailment, MLCW'05, pages 177-190, Berlin, Hei- delberg. Springer-Verlag.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Automatically constructing a corpus of sentential paraphrases", "authors": [ { "first": "B", "middle": [], "last": "William", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dolan", "suffix": "" }, { "first": "", "middle": [], "last": "Brockett", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Third International Workshop on Paraphrasing (IWP2005)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "William B Dolan and Chris Brockett. 2005. Automati- cally constructing a corpus of sentential paraphrases. 
In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The third PASCAL recognizing textual entailment challenge", "authors": [ { "first": "Danilo", "middle": [], "last": "Giampiccolo", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Magnini", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recogniz- ing textual entailment challenge. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 1-9, Prague. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Explaining and harnessing adversarial examples", "authors": [ { "first": "J", "middle": [], "last": "Ian", "suffix": "" }, { "first": "Jonathon", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Shlens", "suffix": "" }, { "first": "", "middle": [], "last": "Szegedy", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6572" ] }, "num": null, "urls": [], "raw_text": "Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversar- ial examples. arXiv preprint arXiv:1412.6572.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "On the robustness of self-attentive models", "authors": [ { "first": "Yu-Lun", "middle": [], "last": "Hsieh", "suffix": "" }, { "first": "Minhao", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Da-Cheng", "middle": [], "last": "Juan", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Wen-Lian", "middle": [], "last": "Hsu", "suffix": "" }, { "first": "Cho-Jui", "middle": [], "last": "Hsieh", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1520--1529", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu-Lun Hsieh, Minhao Cheng, Da-Cheng Juan, Wei Wei, Wen-Lian Hsu, and Cho-Jui Hsieh. 2019. On the robustness of self-attentive models. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1520-1529.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Smart: Robust and efficient fine-tuning for pretrained natural language models through principled regularized optimization", "authors": [ { "first": "Haoming", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Pengcheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Weizhu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Tuo", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.03437" ] }, "num": null, "urls": [], "raw_text": "Haoming Jiang, Pengcheng He, Weizhu Chen, Xi- aodong Liu, Jianfeng Gao, and Tuo Zhao. 2019. 
Smart: Robust and efficient fine-tuning for pre- trained natural language models through princi- pled regularized optimization. arXiv preprint arXiv:1911.03437.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Is bert really robust? natural language attack on text classification and entailment", "authors": [ { "first": "Di", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Zhijing", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Joey", "middle": [ "Tianyi" ], "last": "Zhou", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Szolovits", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11932" ] }, "num": null, "urls": [], "raw_text": "Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2019. Is bert really robust? natural lan- guage attack on text classification and entailment. arXiv preprint arXiv:1907.11932.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Adam: A method for stochastic optimization. ICLR (Poster)", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. ICLR (Poster) 2015.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The winograd schema challenge", "authors": [ { "first": "Hector", "middle": [], "last": "Levesque", "suffix": "" }, { "first": "Ernest", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Leora", "middle": [], "last": "Morgenstern", "suffix": "" } ], "year": 2012, "venue": "Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hector Levesque, Ernest Davis, and Leora Morgen- stern. 2012. The winograd schema challenge. In Thirteenth International Conference on the Princi- ples of Knowledge Representation and Reasoning.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Adversarial training for large neural language models", "authors": [ { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Pengcheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Weizhu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Hoifung", "middle": [], "last": "Poon", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.08994" ] }, "num": null, "urls": [], "raw_text": "Xiaodong Liu, Hao Cheng, Pengcheng He, Weizhu Chen, Yu Wang, Hoifung Poon, and Jianfeng Gao. 2020a. Adversarial training for large neural lan- guage models. 
arXiv preprint arXiv:2004.08994.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Multi-task deep neural networks for natural language understanding", "authors": [ { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Pengcheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Weizhu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4487--4496", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jian- feng Gao. 2019. Multi-task deep neural networks for natural language understanding. Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 4487-4496.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Guihong Cao, and Jianfeng Gao. 2020b. The microsoft toolkit of multitask deep neural networks for natural language understanding", "authors": [ { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jianshu", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Xueyun", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Awa", "suffix": "" }, { "first": "Pengcheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Weizhu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Hoifung", "middle": [], "last": "Poon", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.07972" ] }, "num": null, "urls": [], "raw_text": "Xiaodong Liu, Yu Wang, Jianshu Ji, Hao Cheng, Xueyun Zhu, Emmanuel Awa, Pengcheng He, Weizhu Chen, Hoifung Poon, Guihong Cao, and Jianfeng Gao. 2020b. The microsoft toolkit of multi- task deep neural networks for natural language un- derstanding. arXiv preprint arXiv:2002.07972.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Towards deep learning models resistant to adversarial attacks", "authors": [ { "first": "Aleksander", "middle": [], "last": "Madry", "suffix": "" }, { "first": "Aleksandar", "middle": [], "last": "Makelov", "suffix": "" }, { "first": "Ludwig", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Dimitris", "middle": [], "last": "Tsipras", "suffix": "" }, { "first": "Adrian", "middle": [], "last": "Vladu", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1706.06083" ] }, "num": null, "urls": [], "raw_text": "Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2017. Towards deep learning models resistant to adversar- ial attacks. 
arXiv preprint arXiv:1706.06083.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Virtual adversarial training: a regularization method for supervised and semisupervised learning", "authors": [ { "first": "Takeru", "middle": [], "last": "Miyato", "suffix": "" }, { "first": "Masanori", "middle": [], "last": "Shin-Ichi Maeda", "suffix": "" }, { "first": "Shin", "middle": [], "last": "Koyama", "suffix": "" }, { "first": "", "middle": [], "last": "Ishii", "suffix": "" } ], "year": 2018, "venue": "IEEE transactions on pattern analysis and machine intelligence", "volume": "41", "issue": "", "pages": "1979--1993", "other_ids": {}, "num": null, "urls": [], "raw_text": "Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. 2018. Virtual adversarial training: a regularization method for supervised and semi- supervised learning. IEEE transactions on pat- tern analysis and machine intelligence, 41(8):1979- 1993.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Adversarial training for commonsense inference", "authors": [ { "first": "Lis", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Masayuki", "middle": [], "last": "Asahara", "suffix": "" }, { "first": "Ichiro", "middle": [], "last": "Kobayashi", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.08156" ] }, "num": null, "urls": [], "raw_text": "Lis Pereira, Xiaodong Liu, Fei Cheng, Masayuki Asa- hara, and Ichiro Kobayashi. 2020. Adversarial train- ing for commonsense inference. arXiv preprint arXiv:2005.08156.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Targeted adversarial training for natural language understanding", "authors": [ { "first": "Lis", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Hoifung", "middle": [], "last": "Poon", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Ichiro", "middle": [], "last": "Kobayashi", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "5385--5393", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lis Pereira, Xiaodong Liu, Hao Cheng, Hoifung Poon, Jianfeng Gao, and Ichiro Kobayashi. 2021. Targeted adversarial training for natural language understand- ing. 
Proceedings of the 2021 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, pages 5385-5393 June 6-11, 2021.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "SQuAD: 100,000+ questions for machine comprehension of text", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Konstantin", "middle": [], "last": "Lopyrev", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2383--2392", "other_ids": { "DOI": [ "10.18653/v1/D16-1264" ] }, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Perelygin", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 conference on empirical methods in natural language processing", "volume": "", "issue": "", "pages": "1631--1642", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631-1642.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
In Advances in neural information pro- cessing systems, pages 5998-6008.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Neural network acceptability judgments", "authors": [ { "first": "Alex", "middle": [], "last": "Warstadt", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Samuel R", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1805.12471" ] }, "num": null, "urls": [], "raw_text": "Alex Warstadt, Amanpreet Singh, and Samuel R Bow- man. 2018. Neural network acceptability judgments. arXiv preprint arXiv:1805.12471.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "authors": [ { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1112--1122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Freelb: Enhanced adversarial training for language understanding", "authors": [ { "first": "Chen", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Zhe", "middle": [], "last": "Gan", "suffix": "" }, { "first": "Siqi", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Goldstein", "suffix": "" }, { "first": "Jingjing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.11764" ] }, "num": null, "urls": [], "raw_text": "Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Thomas Goldstein, and Jingjing Liu. 2019. Freelb: En- hanced adversarial training for language understand- ing. arXiv preprint arXiv:1909.11764.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "Acurracy and training time comparison between adversarial training (SMART) and RAPT on the MNLI (matched) development dataset. RAPT obains an accuracy gain of 0.6% yet with a training time drop of 24%." }, "FIGREF1": { "num": null, "uris": null, "type_str": "figure", "text": "Performance on the MNLI development set as we change the layer combination to add the noise." }, "FIGREF2": { "num": null, "uris": null, "type_str": "figure", "text": "Performance on the MRPC development set as we change the layer combination to add the noise." }, "TABREF1": { "num": null, "html": null, "type_str": "table", "text": "Summary information of the GLUE benchmark.", "content": "
Methods | MNLI-m/mm | QQP | RTE | QNLI | MRPC | CoLA | SST | STS-B | Average
 | Acc | Acc/F1 | Acc | Acc | Acc/F1 | Mcc | Acc | P/S Corr | Score
Standard dev | 84.1/84.3 | 90.5/87.3 | 69.3 | 90.9 | 86.9/90.7 | 58.3 | 92.4 | 89.9/89.4 | 84.5
SMART dev | 84.7/85.2 | 90.9/87.9 | 70.8 | 91.4 | 86.4/90.5 | 58.6 | 92.8 | 90.2/89.7 | 84.9
RAPT dev (Embedding) | 85.2/85.5 | 91.2/88.3 | 69.5 | 91.7 | 86.1/90.1 | 58.3 | 92.9 | 90.1/89.8 | 84.9
RAPT dev (\u03b7 = 2) | 85.2/85.5 | 91.2/88.2 | 70.6 | 91.8 | 86.9/90.8 | 58.7 | 92.8 | 89.9/89.6 | 85.1
RAPT dev (BEST \u03b7) | 85.3/85.6 | 91.2/88.2 | 70.6 | 91.8 | 86.9/90.8 | 58.7 | 92.8 | 90.1/89.7 | 85.1
Standard test | 84.2/83.2 | 88.6/70.6 | 67.9 | 90.2 | 84.0/88.3 | 52.1 | 93.1 | 86.3/85.0 | 81.1
SMART test | 85.0/84.2 | 89.1/71.8 | 68.4 | 90.7 | 83.6/88.1 | 52.8 | 93.5 | 86.9/85.6 | 81.6
RAPT test (Embedding) | 85.3/84.4 | 89.1/71.8 | 68.4 | 91.1 | 84.1/88.2 | 53.0 | 93.7 | 87.5/86.4 | 81.9
RAPT test (\u03b7 = 2) | 85.4/84.7 | 89.1/71.9 | 67.4 | 91.1 | 84.0/88.3 | 51.1 | 93.8 | 87.2/86.1 | 81.7
RAPT test (BEST \u03b7) | 85.5/84.7 | 89.1/71.9 | 68.0 | 91.1 | 84.0/88.3 | 52.3 | 93.8 | 86.8/85.5 | 81.8
" }, "TABREF2": { "num": null, "html": null, "type_str": "table", "text": "adding the noise to the intermediate layers using RAPT introduces more diversity. For instance, among the top-10 closest words to the word country, SMART shares eight words with the Standard fine-tuning methods, while RAPT shares only five.", "content": "
Time (min) | MNLI | QQP | RTE | QNLI | MRPC | CoLA | SST | STS-B | Average
Standard | 220 | 175 | 2.5 | 40 | 2.5 | 4.8 | 17.5 | 3.8 | 69.6
SMART | 370 | 270 | 4 | 102 | 4.1 | 8 | 31.5 | 6 | 110.9
RAPT | 280 | 220 | 3 | 80 | 3.4 | 6.5 | 24.5 | 4.9 | 86.7
RAPT/SMART | 0.76 | 0.81 | 0.75 | 0.78 | 0.81 | 0.79 | 0.77 | 0.8 | 0.78
" }, "TABREF5": { "num": null, "html": null, "type_str": "table", "text": "Input: [CLS] by law , mexico can only export half the oil it produces to the united states .[SEP] mexico produces more oil than any other country .", "content": "
[SEP]
SMART | SMART (+noise) | RAPT | RAPT (+noise)
word | similarity | word | similarity | word | similarity | word | similarity
could | 0.5857 | could | 0.5855 | could | 0.5656 | could | 0.4580
may | 0.3869 | may | 0.3873 | cannot | 0.5162 | cannot | 0.4269
cannot | 0.3774 | cannot | 0.3776 | couldn | 0.5098 | couldn | 0.4213
might | 0.3736 | might | 0.3736 | allows | 0.3445 | allows | 0.2859
couldn | 0.3651 | couldn | 0.3650 | helps | 0.3408 | must | 0.2759
must | 0.3388 | must | 0.3388 | shall | 0.3218 | may | 0.2659
will | 0.3312 | will | 0.3314 | should | 0.3183 | doesn | 0.2639
should | 0.3119 | should | 0.3121 | must | 0.3130 | sees | 0.2531
would | 0.2914 | would | 0.2915 | doesn | 0.3106 | helps | 0.2388
shall | 0.2624 | shall | 0.2626 | may | 0.3083 | allow | 0.2318
" } } } }