{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:09:22.587163Z" }, "title": "Second-Order NLP Adversarial Examples", "authors": [ { "first": "John", "middle": [ "X" ], "last": "Morris", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Virginia", "location": {} }, "email": "jm8wx@virginia.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Adversarial example generation methods in NLP rely on models like language models or sentence encoders to determine if potential adversarial examples are valid. In these methods, a valid adversarial example fools the model being attacked, and is determined to be semantically or syntactically valid by a second model. Research to date has counted all such examples as errors by the attacked model. We contend that these adversarial examples may not be flaws in the attacked model, but flaws in the model that determines validity. We term such invalid inputs second-order adversarial examples. We propose the constraint robustness curve, and associated metric ACCS, as tools for evaluating the robustness of a constraint to second-order adversarial examples. To generate this curve, we design an adversarial attack to run directly on the semantic similarity models. We test on two constraints, the Universal Sentence Encoder (USE) and BERTScore. Our findings indicate that such second-order examples exist, but are typically less common than first-order adversarial examples in stateof-the-art models. They also indicate that USE is effective as constraint on NLP adversarial examples, while BERTScore is nearly ineffectual. Code for running the experiments in this paper is available here.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Adversarial example generation methods in NLP rely on models like language models or sentence encoders to determine if potential adversarial examples are valid. In these methods, a valid adversarial example fools the model being attacked, and is determined to be semantically or syntactically valid by a second model. Research to date has counted all such examples as errors by the attacked model. We contend that these adversarial examples may not be flaws in the attacked model, but flaws in the model that determines validity. We term such invalid inputs second-order adversarial examples. We propose the constraint robustness curve, and associated metric ACCS, as tools for evaluating the robustness of a constraint to second-order adversarial examples. To generate this curve, we design an adversarial attack to run directly on the semantic similarity models. We test on two constraints, the Universal Sentence Encoder (USE) and BERTScore. Our findings indicate that such second-order examples exist, but are typically less common than first-order adversarial examples in stateof-the-art models. They also indicate that USE is effective as constraint on NLP adversarial examples, while BERTScore is nearly ineffectual. Code for running the experiments in this paper is available here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "If an imperceptible change to an input causes a model to make a misclassification, the perturbed input is known as an adversarial example (Goodfellow et al., 2014) . 
In domains with continuous inputs like audio and vision, whether such a change is considered \"imperceptible\" can be easily measured: A change to an image may be considered imperceptible (and thus a valid adversarial example) if the resulting image is no more than some fixed distance away in pixel space (Chakraborty et al., 2018) . Although the perturbation has different meaning than the original (and the entailment model correctly predicts a contradiction), the sentence encoding similarity does not reflect this change. Current NLP adversarial example generation methods would incorrectly consider this a flaw in the entailment model. We refer to the function that determines imperceptibility as the constraint, C. For input x and perturbation x adv , if C(x, x adv ) is true, x adv is a valid perturbation for x.", "cite_spans": [ { "start": 138, "end": 163, "text": "(Goodfellow et al., 2014)", "ref_id": "BIBREF10" }, { "start": 470, "end": 496, "text": "(Chakraborty et al., 2018)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Different domains call for different constraints. In vision, a common constraint is`1(x, x adv ), the maximum pixel-wise distance between image x and its perturbation x adv (Goodfellow et al., 2014) . In audio, a common constraint is |dB(x) dB(x adv )|, the distortion in decibels between audio input x and perturbation x adv (Carlini and Wagner, 2018) . Both constraints are easily computed, wellunderstood, and correlate with human perceptual distance.", "cite_spans": [ { "start": 173, "end": 198, "text": "(Goodfellow et al., 2014)", "ref_id": "BIBREF10" }, { "start": 326, "end": 352, "text": "(Carlini and Wagner, 2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Choosing the correct constraint is not always so straightforward. In discrete domains like language, there is no obvious choice. In fact, the field lacks consensus on even the meaning of \"imperceptibility\". Different adversarial attacks have used different definitions of imperceptibility (Zhang et al., 2020a) . One common definition (Alzantot et al., 2018; Jin et al., 2019; Ren et al., 2019; Garg and Ramakrishnan, 2020) is imperceptibility with respect to meaning: C(x, x adv ) is true if x adv retains the semantics of x.", "cite_spans": [ { "start": 289, "end": 310, "text": "(Zhang et al., 2020a)", "ref_id": "BIBREF32" }, { "start": 335, "end": 358, "text": "(Alzantot et al., 2018;", "ref_id": "BIBREF0" }, { "start": 359, "end": 376, "text": "Jin et al., 2019;", "ref_id": "BIBREF12" }, { "start": 377, "end": 394, "text": "Ren et al., 2019;", "ref_id": "BIBREF24" }, { "start": 395, "end": 423, "text": "Garg and Ramakrishnan, 2020)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "With this definition, a perturbation x adv is determined to be a valid adversarial example if it simultaneously fools the model and retains the semantics of x. This formulation is problematic because measuring semantic similarity is an open problem in NLP. As a consequence, many adversarial attacks use a second NLP model as a constraint, to determine whether or not x adv preserves the semantics of x.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Just like the model under attack, the semantic similarity model is vulnerable to adversarial examples. 
So when this type of attack finds a valid adversarial example, it is unclear which model has made a mistake: was it the model being attacked, or the model used to enforce the constraint?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In other words, it is possible that the semantic similarity model improperly classified x adv as preserving the semantics of x. We refer to these flaws in constraints as second-order adversarial examples. Figure 1 shows a sample second-order adversarial example. Second-order adversarial examples have been largely ignored in the literature on NLP adversarial examples to date. Now that we are aware of the existence of second-order adversarial examples, we seek to minimize their impact. How can we measure a given constraint's susceptibility to second-order adversarial examples? We suggest one such measurement tool: the constraint robustness curve and its associated metric ACCS.", "cite_spans": [], "ref_spans": [ { "start": 205, "end": 213, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We then develop an adversarial example generation technique for finding examples that fool these semantic similarity models. Our findings indicate that adversarial examples for these types of models exist, but are less likely than adversarial examples that fool other NLP models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Along the way, we compare the Universal Sentence Encoder (USE) (Cer et al., 2018) , a sentence encoder commonly used as a constraint for NLP adversarial examples, with BERTScore , a metric that outperforms sentence encoders for evaluating text generation systems.", "cite_spans": [ { "start": 63, "end": 81, "text": "(Cer et al., 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main contributions of this work can be summarized as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. We formally define second-order adversarial examples, a previously unaddressed is-sue with the problem statement for semanticspreserving adversarial example generation in NLP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. We propose Adjusted Constraint C-Statistic (ACCS), the normalized area under the constraint robustness curve, as a measurement of the efficacy of a given model as a constraint on adversarial examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. We run NLP adversarial attacks not on models fine-tuned for downstream tasks, but on semantic similarity models used to regulate the adversarial attack process. We show that they are [robust-not robust]. Across the board, USE achieves a much higher ACCS, indicating that USE is a more robust choice than BERTScore for constraining NLP adversarial perturbations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To create natural language adversarial examples that preserve semantics, past work has implemented the constraint using a model that measures semantic similarity (Garg and Ramakrishnan, 2020; Alzantot et al., 2018; Li et al., 2018; Jin et al., 2019) . 
For semantic similarity model S, original input x, and adversarial perturbation x adv , constraint C can be defined as:", "cite_spans": [ { "start": 162, "end": 191, "text": "(Garg and Ramakrishnan, 2020;", "ref_id": "BIBREF9" }, { "start": 192, "end": 214, "text": "Alzantot et al., 2018;", "ref_id": "BIBREF0" }, { "start": 215, "end": 231, "text": "Li et al., 2018;", "ref_id": "BIBREF17" }, { "start": 232, "end": 249, "text": "Jin et al., 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Second-order adversarial examples", "sec_num": "2" }, { "text": "C(x, x adv ) := S(x, x adv ) \u2265 \u270f (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Second-order adversarial examples", "sec_num": "2" }, { "text": "where \u270f is a threshold that determines semantic similarity. If the semantic similarity of x and x adv is at least \u270f, the perturbation is considered a valid adversarial example.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Second-order adversarial examples", "sec_num": "2" }, { "text": "Using such a constraint in an untargeted attack on classification model F , the attack goal function G can be written as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Second-order adversarial examples", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "G(x, x adv ) := (F (x) \u2260 F (x adv )) \u2227 C(x, x adv ) = (F (x) \u2260 F (x adv )) \u2227 (S(x, x adv ) \u2265 \u270f)", "eq_num": "(2)" } ], "section": "Second-order adversarial examples", "sec_num": "2" }, { "text": "Here, x adv is a valid adversarial example when both criteria of the goal are fulfilled: F produces a different class output for x adv than for x, and C(x, x adv ) is true. This type of joint goal function is common in NLP adversarial attacks (Zhang et al., 2020a) .", "cite_spans": [ { "start": 243, "end": 264, "text": "(Zhang et al., 2020a)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Second-order adversarial examples", "sec_num": "2" }, { "text": "It is possible that these constraints evaluate the semantic similarity of the original and perturbed text incorrectly. If the semantic similarity score is erroneously low, then x adv will be rejected by the algorithm; if the score is erroneously high, then the algorithm will consider x adv a valid adversarial example.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Second-order adversarial examples", "sec_num": "2" }, { "text": "If S(x, x adv ) is too high, x adv is incorrectly considered a valid adversarial example, and thus counted as a flaw in model F . However, since semantics is not preserved from x to x adv , there is no reason to assume that F (x) should be consistent with F (x adv ). 
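To make the joint goal in Equation 2 concrete, the check below is a minimal Python sketch (not the paper's released code); predict_class and similarity are hypothetical stand-ins for the victim model F and the semantic similarity model S, and epsilon is the constraint threshold.

```python
def is_valid_adversarial_example(x, x_adv, predict_class, similarity, epsilon):
    """Joint goal of Equation 2: the perturbation must change the
    classifier's prediction AND satisfy the similarity constraint."""
    fools_classifier = predict_class(x) != predict_class(x_adv)  # F(x) != F(x_adv)
    constraint_holds = similarity(x, x_adv) >= epsilon           # C(x, x_adv)
    # If constraint_holds is itself a mistake by S (the pair is not actually
    # paraphrastic), the "valid" example found here is second-order.
    return fools_classifier and constraint_holds
```

Note that this check trusts S completely; nothing in it distinguishes a genuine error of F from an error of S. 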
The flaw is actually in S, the semantic similarity model that erroneously considered x adv to be a valid adversarial example.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Second-order adversarial examples", "sec_num": "2" }, { "text": "For adversarial attacks on model F using a constraint determined by model S, we suggest the following terminology:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Second-order adversarial examples", "sec_num": "2" }, { "text": "\u2022 First-order adversarial examples are perturbations that are correctly classified as imperceptible by S, and fool F .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Second-order adversarial examples", "sec_num": "2" }, { "text": "\u2022 Second-order adversarial examples fool S, the model used as a constraint. Regardless of the output of F , these are adversarial examples for S.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Second-order adversarial examples", "sec_num": "2" }, { "text": "In the next section, we suggest a method for determining the vulnerability of S to second-order adversarial examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Second-order adversarial examples", "sec_num": "2" }, { "text": "In this section, we propose the constraint robustness curve, a method for analyzing the robustness, or susceptibility to second-order adversarial examples, of a given constraint. Each semantic similarity model may produce scores on a different scale, varying the best \u270f for preservation of semantics. As such, we cannot fairly compare two models at the same values of \u270f.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constraint robustness curves and ACCS", "sec_num": "3" }, { "text": "However, the problem of comparing two binary classifiers that may have different threshold scales is common in machine learning (Hajian-Tilaki, 2013). Inspired by the receiver operating characteristic (ROC) curve for binary classifiers, we propose the constraint robustness curve, a plot of first-order vs. second-order adversarial examples as constraint sensitivity varies. To create the constraint robustness curve for semantic similarity model S and threshold \u270f, we plot the number of true positives (first-order adversarial examples, found using S as a constraint) vs. false positives (secondorder adversarial examples, found by attacking S directly). The constraint robustness curve can be interpreted similarly to an ROC curve. An effective constraint will allow many true positives (first-order adversarial examples) before many false positives (second-order adversarial examples). The model that produces a curve with a higher AUC (area under the constraint robustness curve) is better at distinguishing valid from invalid adversarial examples, and less susceptible to second-order adversarial examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constraint robustness curves and ACCS", "sec_num": "3" }, { "text": "When \u270f = 0, C(x, x adv ) is always true. But even when the constraint accepts all possible x adv , some attacks may still fail. So unlike a typical ROC curve, which is bounded between 0 and 1 on both axes, the constraint robustness curve is bounded on each axis between 0 and the maximum attack success rate (when \u270f = 0). 
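As a concrete illustration of how the curve and the normalized score described next (ACCS) might be computed, here is a minimal sketch under simplifying assumptions (not the paper's implementation); first_order_rates and second_order_rates are assumed to be attack success fractions collected at the same sequence of thresholds.

```python
import numpy as np

def constraint_robustness_auc(first_order_rates, second_order_rates):
    """Area under the constraint robustness curve: second-order success
    rate on the x-axis, first-order success rate on the y-axis."""
    xs = np.asarray(second_order_rates, dtype=float)
    ys = np.asarray(first_order_rates, dtype=float)
    order = np.argsort(xs)                  # sort points along the x-axis
    return np.trapz(ys[order], xs[order])   # trapezoidal area under the curve

def accs(first_order_rates, second_order_rates):
    """Normalize the raw area by the maximum rate on each axis (the attack
    success rates when the constraint accepts everything), so an ideal
    constraint scores 1 and a naive one scores 0."""
    raw_area = constraint_robustness_auc(first_order_rates, second_order_rates)
    max_x = max(second_order_rates)
    max_y = max(first_order_rates)
    return raw_area / (max_x * max_y) if max_x * max_y > 0 else 0.0
```

With the toy numbers reported for Figure 2 (raw area 0.105, maximum rates 0.7 and 0.3), this normalization yields 0.5. 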
We suggest normalizing to bound the score between 0 and 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constraint robustness curves and ACCS", "sec_num": "3" }, { "text": "We call the resulting metric Adjusted Constraint C-Statistic (ACCS) 1 . ACCS is defined as the area under the constraint robustness curve normalized by the maximum first-and second-order success rate. Figure 2 shows an example of a constraint robustness curve for a toy problem. (The area under the green dashed curve is 0.105; after normalizing by the maximum first-and secondorder attack success rates of 0.7 and 0.3, we find ACCS = 0.5.) There is one crucial difference between interpreting an ROC curve and a constraint robustness curve. A naive binary classifier will guess randomly and achieve as many false positives as true positives, and an AUC of 0.5. A naive constraint will yield all second-order adversarial examples at the same threshold, and garner an ACCS of 0.0.", "cite_spans": [], "ref_spans": [ { "start": 201, "end": 209, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Constraint robustness curves and ACCS", "sec_num": "3" }, { "text": "To create such a curve, we must devise methods for generating both first-order and second-order adversarial examples. In the following section, we propose an attack for each purpose.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constraint robustness curves and ACCS", "sec_num": "3" }, { "text": "To calculate ACCS(S, \u270f) for each S and \u270f, we design two attacks: one to calculate the number of first-order adversarial examples, and one to calculate the number of second-order adversarial examples. In Section 5, we run the attacks across a variety of models and datasets and examine their constraint robustness curves.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating first and second-order adversarial examples", "sec_num": "4" }, { "text": "To measure the number of first-order adversarial examples allotted by a semantic similarity model for a given value of \u270f, we can run any standard adversarial attack that uses the semantic similarity model as a constraint.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating first-order adversarial examples", "sec_num": "4.1" }, { "text": "We devise a simple attack to generate adversarial examples for some classifier F . We choose untargeted classification, the goal of changing the classifier's output to any but the ground-truth output class, as the goal function. To generate perturbations, we swap words in x with their synonyms from WordNet (Miller, 1995) .", "cite_spans": [ { "start": 308, "end": 322, "text": "(Miller, 1995)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Generating first-order adversarial examples", "sec_num": "4.1" }, { "text": "Simply swapping words with synonyms from a thesaurus would frequently create ungrammatical perturbations (even though they may be semantically similar to the originals). To better preserve grammaticality, we enforce an additional constraint, requiring that the log-probability of any replaced word not decrease by more than some fixed amount, as according to the GPT-2 language model (Radford et al., 2019). (This is similar the language model perplexity constraints used in the NLP attacks of Alzantot et al. (2018) and (Kuleshov et al., 2018) .)", "cite_spans": [ { "start": 494, "end": 516, "text": "Alzantot et al. 
(2018)", "ref_id": "BIBREF0" }, { "start": 521, "end": 544, "text": "(Kuleshov et al., 2018)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Generating first-order adversarial examples", "sec_num": "4.1" }, { "text": "As an additional constraint, the attack filters potential perturbations using the semantic similarity model to ensure that S(x, x adv ) \u2265 \u270f.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating first-order adversarial examples", "sec_num": "4.1" }, { "text": "Finally, we choose greedy search with word importance ranking as our search method (Gao et al., 2018) . We can use these four components (goal function, transformation, constraints, and search method) to construct an adversarial attack that generates adversarial examples for any NLP classifier (Morris et al., 2020b).", "cite_spans": [ { "start": 76, "end": 94, "text": "(Gao et al., 2018)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Generating first-order adversarial examples", "sec_num": "4.1" }, { "text": "Generating adversarial examples for classification model F is a well-studied problem. But how do we generate perturbations that fool S, a semantic similarity model?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating second-order adversarial examples", "sec_num": "4.2" }, { "text": "We first note what these adversarial examples might look like. Our goal is to find 'false positives' where a semantic similarity model incorrectly indicates that semantics is preserved. Specifically, we want to find some (x, x adv ) where S(x, x adv ) \u2265 \u270f, even though we know x adv does not preserve the semantics of x.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating second-order adversarial examples", "sec_num": "4.2" }, { "text": "To generate such perturbations, we design a transformation with the goal of changing the meaning of an input x as much as possible (instead of preserving its meaning). At each step of the adversarial attack, instead of replacing words with their synonyms, we replace words with their antonyms, also sourced from WordNet (Miller, 1995) .", "cite_spans": [ { "start": 314, "end": 328, "text": "(Miller, 1995)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Generating second-order adversarial examples", "sec_num": "4.2" }, { "text": "Next, we need to establish a goal function that perturbations must meet to be considered adversarial examples for a given semantic similarity metric. We establish the following goal function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating second-order adversarial examples", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "G(x, x adv ) := (S(x, x adv ) \u2265 \u270f) \u2227 (( \u2211 i 1[x[i] \u2260 x adv [i]] ) \u2265 \u03b3)", "eq_num": "(3)" } ], "section": "Generating second-order adversarial examples", "sec_num": "4.2" }, { "text": "Here, x[i] represents the i-th word in sequence x, and \u03b3 represents the minimum number of words that must be changed for the attack to succeed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating second-order adversarial examples", "sec_num": "4.2" }, { "text": "With our goal function, perturbation x adv is a valid adversarial example if it differs from x by at least \u03b3 words, but its semantic similarity to x is still higher than \u270f. 
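A minimal sketch of this second-order goal check (Equation 3) follows; it is illustrative only, similarity is again a hypothetical handle for S, the inputs are assumed to be pre-tokenized word lists of equal length (word swaps only), and epsilon and gamma are the thresholds defined above.

```python
def is_second_order_adversarial_example(x_words, x_adv_words, similarity, epsilon, gamma):
    """Goal of Equation 3: at least gamma words were changed (antonym swaps
    in our attack), yet S still scores the pair above the threshold."""
    num_words_changed = sum(
        w != w_adv for w, w_adv in zip(x_words, x_adv_words)
    )
    still_judged_similar = similarity(" ".join(x_words), " ".join(x_adv_words)) >= epsilon
    return num_words_changed >= gamma and still_judged_similar
```

Any pair that passes this check is, by construction, an error of S rather than of the attacked classifier. 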
If words are substituted with antonyms, as \u03b3 increases, we can say with high certainty that semantics is not preserved. In this case, the semantic similarity model should produce a value smaller than \u270f.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating second-order adversarial examples", "sec_num": "4.2" }, { "text": "As in 4.1, we apply a second constraint, using GPT-2 to ensure that substituted antonyms are likely in their context. For the search method, we use beam search, as it does a better job finding adversarial examples when the set of valid perturbations is sparse (Ebrahimi et al., 2017) .", "cite_spans": [ { "start": 255, "end": 278, "text": "(Ebrahimi et al., 2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Generating second-order adversarial examples", "sec_num": "4.2" }, { "text": "A sample output of this attack (where \u03b3 = 2) is shown in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 55, "end": 63, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Generating second-order adversarial examples", "sec_num": "4.2" }, { "text": "We implemented our adversarial attacks using the TextAttack adversarial attack framework (Morris et al., 2020b) . Figure 4 shows the attack prototypes of each attack, as constructed in TextAttack.", "cite_spans": [ { "start": 89, "end": 111, "text": "(Morris et al., 2020b)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 114, "end": 122, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Attack Prototypes", "sec_num": "5.1" }, { "text": "As noted in the previous section, each attack used the GPT-2 language model to preserve grammaticality during word replacements; we disallowed word replacements whose log-probability was lower than that of the original word by 2.0 or more. The other constraints in the attack prototype disallow multiple modifications of the same word, stopword substitutions, and, in the case of entailment datasets, edits to the premise. 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attack Prototypes", "sec_num": "5.1" }, { "text": "We tested two semantic similarity models as S:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic similarity models", "sec_num": "5.2" }, { "text": "\u2022 The Universal Sentence Encoder (USE) (Cer et al., 2018) , a model trained to encode sentences into fixed-length vectors. Semantic similarity between x and x adv is measured as the cosine similarity of their encodings. This is consistent with the NLP attack literature (Li et al., 2018; Jin et al., 2019; Garg and Ramakrishnan, 2020 ).", "cite_spans": [ { "start": 39, "end": 57, "text": "(Cer et al., 2018)", "ref_id": "BIBREF4" }, { "start": 266, "end": 283, "text": "(Li et al., 2018;", "ref_id": "BIBREF17" }, { "start": 284, "end": 301, "text": "Jin et al., 2019;", "ref_id": "BIBREF12" }, { "start": 302, "end": 329, "text": "Garg and Ramakrishnan, 2020", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic similarity models", "sec_num": "5.2" }, { "text": "\u2022 BERTScore (Zhang et al., 2019) , an automatic evaluation metric for text generation. BERTScore computes a similarity score for each token in the candidate sentence with each token in the reference sentence using the contextual embedding of each token. According to human studies, BERTScore correlates better than other metrics (including sentence encodings) for evaluating machine translations. 
It also outperforms sentence encodings on PAWS (Yang et al., 2019) , an adversarial paraphrase dataset where inputs have a similar format to NLP adversarial examples.", "cite_spans": [ { "start": 423, "end": 442, "text": "(Yang et al., 2019)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic similarity models", "sec_num": "5.2" }, { "text": "To create constraint robustness curves, we ran each attack (first- and second-order) while varying \u270f from 0.75 to 1.0 in increments of 0.01. For the SST-2 dataset, which has some very short examples, we varied \u270f from 0.5 to 1.0 in increments of 0.02. For the second-order attack, we fixed \u03b3 = 3. For our tests, we chose the following three datasets:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Victim Classifiers", "sec_num": "5.3" }, { "text": "\u2022 The Stanford Natural Language Inference (SNLI) Corpus, which contains labeled sentence pairs for textual entailment (Bowman et al., 2015);", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Victim Classifiers", "sec_num": "5.3" }, { "text": "\u2022 The Stanford Sentiment Treebank v2 (SST-2) Corpus (Socher et al., 2013) , a phrase-level sentiment classification dataset;", "cite_spans": [ { "start": 52, "end": 73, "text": "(Socher et al., 2013)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Victim Classifiers", "sec_num": "5.3" }, { "text": "\u2022 Rotten Tomatoes dataset 3 , a sentence-level sentiment classification dataset (Pang and Lee, 2005) .", "cite_spans": [ { "start": 80, "end": 100, "text": "(Pang and Lee, 2005)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Victim Classifiers", "sec_num": "5.3" }, { "text": "For the first-order attack, we chose three target models fine-tuned on each dataset (total of nine models): BERT (Devlin et al., 2018) , ALBERT (Lan et al., 2019) , and DistilBERT (Sanh et al., 2019) . All models used were pre-trained models provided by TextAttack (Morris et al., 2020b) . More details about experimental setup are provided in A.1.", "cite_spans": [ { "start": 113, "end": 134, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF6" }, { "start": 144, "end": 162, "text": "(Lan et al., 2019)", "ref_id": "BIBREF16" }, { "start": 180, "end": 199, "text": "(Sanh et al., 2019)", "ref_id": "BIBREF26" }, { "start": 254, "end": 287, "text": "TextAttack (Morris et al., 2020b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Victim Classifiers", "sec_num": "5.3" }, { "text": "We sampled 100 examples from the test set of each dataset for each attack. We repeated each attack twice, once using BERTScore and once using the Universal Sentence Encoder. In total, we ran 300 attacks. Table 1 shows results for each model and dataset. Figure 5 shows the constraint robustness curve for each scenario. Surprisingly, the Universal Sentence Encoder achieved a higher ACCS than BERTScore across all nine scenarios. This appears contradictory to the claim of Zhang et al. (2019) that \"BERTScore is more robust to challenging examples when compared to existing metrics\".", "cite_spans": [], "ref_spans": [ { "start": 200, "end": 207, "text": "Table 1", "ref_id": null }, { "start": 250, "end": 258, "text": "Figure 5", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Results", "sec_num": "5.4" }, { "text": "Additionally, at any given point, first-order adversarial examples are found over twice as often as second-order adversarial examples. 
This indicates that most adversarial examples found in NLP attacks may be first-order. This corroborates the human studies of Morris et al. (2020a), which showed that humans rate adversarial examples from the attacks of Alzantot et al. (2018) and Jin et al. (2019) as preserving semantics around 65% of the time. Table 1 : Results of first-order and second-order attacks on BERTScore and the Universal Sentence Encoder (USE). Values are ACCS, a measure of constraint robustness. A higher ACCS score indicates a better constraint. Across models and datasets, USE achieves a higher ACCS than BERTScore.", "cite_spans": [], "ref_spans": [ { "start": 451, "end": 458, "text": "Table 1", "ref_id": null }, { "start": 739, "end": 746, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5.4" }, { "text": "Sentence length, S, and \u270f. As input x grows in length, a single word swap will have an increasingly smaller impact on S(x, x adv ). Some NLP attacks that use sentence encoders as a constraint have combatted this problem by measuring the sentence encodings within a fixed-length window of words around each substitution. For example, Jin et al. (2019) consider a window of 15 words around each substitution. We chose instead to encode the entire input, as both the Universal Sentence Encoder and BERTScore were trained using full inputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Applications beyond NLP. Table 2 lists examples of validity metrics across domains. To the best of our knowledge, no domain outside of NLP has proposed using a deep learning model as a constraint (Chakraborty et al., 2018) . If adversarial attacks in other domains do decide to use deep learning models to measure imperceptibility, they can follow our method to compare imperceptibility models and evaluate their robustness.", "cite_spans": [ { "start": 200, "end": 226, "text": "(Chakraborty et al., 2018)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 25, "end": 32, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "The Catch-22 of second-order adversarial examples. Any adversarial example generation method that employs an auxiliary model as a constraint may generate second-order adversarial examples. Although NLP is the only domain to use a model as a constraint thus far, this problem is likely to appear in other domains in the future. This makes the problem of detecting second-order adversarial examples more important.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Towards better constraints on NLP adversarial examples. Neither USE nor BERTScore achieved an especially high ACCS on any of the studied tasks. We leave it to future work to explore more choices of semantic similarity model and find one that is more suitable as a constraint on NLP adversarial examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "We can categorize adversarial attacks in NLP based on their chosen definition of imperceptibility: generally, adversarial attacks in NLP aim for either visual imperceptibility or semantic imperceptibility.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "Visual imperceptibility. 
These adversarial example generation techniques focus on character-level modifications that a fast-reading human may not notice. HotFlip (Ebrahimi et al., 2017) uses the gradient of a character-level classifier to guide the attack, and can often change the classifier output with a single flip. (HotFlip also studies word-level replacements, but only briefly.) Other works (Belinkov and Bisk, 2017; Gao et al., 2018; Pruthi et al., 2019; Jones et al., 2020) craft adversarial examples by inducing 'typos' in the input sequence x, for example, by swapping two characters with one another, or shuffling the characters in an input. In these cases, imperceptibility is generally modeled using string edit distance, so second-order adversarial examples do not exist.", "cite_spans": [ { "start": 161, "end": 184, "text": "(Ebrahimi et al., 2017)", "ref_id": "BIBREF7" }, { "start": 397, "end": 422, "text": "(Belinkov and Bisk, 2017;", "ref_id": "BIBREF1" }, { "start": 423, "end": 440, "text": "Gao et al., 2018;", "ref_id": "BIBREF8" }, { "start": 441, "end": 461, "text": "Pruthi et al., 2019;", "ref_id": "BIBREF22" }, { "start": 462, "end": 481, "text": "Jones et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "Semantic imperceptibility. This work focuses on this class of NLP adversarial examples, in which x adv must preserve the semantics of x. Most work generates these x adv by iteratively swapping words in x with synonyms, and filtering by some model-based constraint (Kuleshov et al., 2018; Ren et al., 2019; Jin et al., 2019; Garg and Ramakrishnan, 2020) . Some alternative algorithms have been proposed: Zhao et al. (2017) encode x into a latent representation using a generative adversarial network, apply the perturbation to the latent vector, and decode to obtain x adv . Ribeiro et al. (2018) craft 'adversarial rules' (mappings from x \u2192 x adv ) by a combination of back-translation and human evaluation. TextBugger (Li et al., 2018) crafts adversarial examples using word-level substitutions, but uniquely chooses between character-level perturbations (exploiting imperceptibility in appearance) and word-level synonym swaps (exploiting imperceptibility in meaning). Although there have been many adversarial attacks proposed on NLP models (Zhang et al., 2020a) , surprisingly few constraints have been explored. Table 2 : Examples of constraints across adversarial example domains (Domain: Constraint): Images (Goodfellow et al., 2014) , maximum \u2113\u221e norm; Audio (Carlini and Wagner, 2018) , minimum distortion in decibels (dB); Graphs, maximum number of edges modified; Text (Zhang et al., 2020b) , minimum USE cosine similarity. All metrics are calculated between the original input and any potentially valid adversarial perturbation.", "cite_spans": [ { "start": 273, "end": 296, "text": "(Kuleshov et al., 2018;", "ref_id": "BIBREF15" }, { "start": 297, "end": 314, "text": "Ren et al., 2019;", "ref_id": "BIBREF24" }, { "start": 315, "end": 332, "text": "Jin et al., 2019;", "ref_id": "BIBREF12" }, { "start": 333, "end": 361, "text": "Garg and Ramakrishnan, 2020)", "ref_id": "BIBREF9" }, { "start": 728, "end": 745, "text": "(Li et al., 2018)", "ref_id": "BIBREF17" }, { "start": 1052, "end": 1073, "text": "(Zhang et al., 2020a)", "ref_id": "BIBREF32" }, { "start": 1163, "end": 1188, "text": "(Goodfellow et al., 2014)", "ref_id": "BIBREF10" }, { "start": 1213, "end": 1239, "text": "(Carlini and Wagner, 2018)", "ref_id": "BIBREF3" }, { "start": 1321, "end": 1342, "text": "(Zhang et al., 2020b)", "ref_id": "BIBREF33" } ], "ref_spans": [ { "start": 1373, "end": 1380, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "Alzantot et al. (2018) were the first to propose the use of a language model as a constraint on grammaticality. Kuleshov et al. (2018) use both a language model to enforce grammaticality and skip-thought vectors (Kiros et al., 2015) , a form of sentence encoding, to enforce semantic preservation. Several attacks have used the Universal Sentence Encoder to enforce semantic preservation (Li et al., 2018; Jin et al., 2019; Garg and Ramakrishnan, 2020) . Morris et al. (2020a) categorized constraints on NLP adversarial examples into four groups: semantics, grammaticality, overlap, and non-suspicion. They also explored the effect of varying the constraint threshold on the quality of generated adversarial examples, as judged by human annotators. Xu et al. (2020) examined the quality of generated adversarial examples based on different thresholds of attack success rate. However, neither study considered adversarial examples that may have arisen from constraints, or explored evaluation via running adversarial attacks on the constraints directly.", "cite_spans": [ { "start": 8, "end": 30, "text": "Alzantot et al. (2018)", "ref_id": "BIBREF0" }, { "start": 119, "end": 141, "text": "Kuleshov et al. (2018)", "ref_id": "BIBREF15" }, { "start": 220, "end": 240, "text": "(Kiros et al., 2015)", "ref_id": "BIBREF14" }, { "start": 396, "end": 413, "text": "(Li et al., 2018;", "ref_id": "BIBREF17" }, { "start": 414, "end": 431, "text": "Jin et al., 2019;", "ref_id": "BIBREF12" }, { "start": 432, "end": 460, "text": "Garg and Ramakrishnan, 2020)", "ref_id": "BIBREF9" }, { "start": 753, "end": 769, "text": "Xu et al. (2020)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "Work in generating adversarial examples in NLP has relied on outside models to evaluate imperceptibility. While useful, this inadvertently increases the size of the attack space. We propose methods for analyzing constraints' susceptibility to second-order adversarial examples, including the constraint robustness curve and its associated ACCS metric. This requires us to design an attack specific to semantic similarity models. We demonstrate these methods with a comparison of two models used in constraints, the Universal Sentence Encoder and BERTScore. We would especially like to see future research examine constraint robustness curves across more constraints and different attack designs. 
We hope that future researchers can use our method when choosing constraints for NLP adversarial example generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "C-statistic is another name for AUC.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "It is standard for NLP attacks on entailment models to only edit the hypothesis(Alzantot et al., 2018;Zhao et al., 2017;Jin et al., 2019)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The Rotten Tomatoes dataset is sometimes called Movie Review, or MR, dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work arose out of a series of discussions with Eli Lifland about adversarial examples in NLP. Thanks to him and many others, including Jeffrey Yoo, Jack Lanchantin, Di Jin, Yanjun Qi, and Charles Frye, for engaging in similar discussions, which ranged from empirical to downright philosophical.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Generating natural language adversarial examples", "authors": [ { "first": "Moustafa", "middle": [], "last": "Alzantot", "suffix": "" }, { "first": "Yash", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Ahmed", "middle": [], "last": "Elgohary", "suffix": "" }, { "first": "Bo-Jhang", "middle": [], "last": "Ho", "suffix": "" }, { "first": "Mani", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial ex- amples.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Synthetic and natural noise both break neural machine translation", "authors": [ { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Bisk", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yonatan Belinkov and Yonatan Bisk. 2017. Synthetic and natural noise both break neural machine transla- tion.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A large annotated corpus for learning natural language inference", "authors": [ { "first": "Gabor", "middle": [], "last": "Samuel R Bowman", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Potts", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. 
A large anno- tated corpus for learning natural language inference.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Audio adversarial examples: Targeted attacks on Speech-to-Text", "authors": [ { "first": "Nicholas", "middle": [], "last": "Carlini", "suffix": "" }, { "first": "David", "middle": [], "last": "Wagner", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nicholas Carlini and David Wagner. 2018. Audio ad- versarial examples: Targeted attacks on Speech-to- Text.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Universal sentence encoder", "authors": [ { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Sheng-Yi", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Hua", "suffix": "" }, { "first": "Nicole", "middle": [], "last": "Limtiaco", "suffix": "" }, { "first": "Rhomni", "middle": [], "last": "St", "suffix": "" }, { "first": "Noah", "middle": [], "last": "John", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Guajardo-Cespedes", "suffix": "" }, { "first": "", "middle": [], "last": "Yuan", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Cer, Yinfei Yang, Sheng-Yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Adversarial attacks and defences: A survey", "authors": [ { "first": "Anirban", "middle": [], "last": "Chakraborty", "suffix": "" }, { "first": "Manaar", "middle": [], "last": "Alam", "suffix": "" } ], "year": 2018, "venue": "Vishal Dey, Anupam Chattopadhyay, and Debdeep Mukhopadhyay", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anirban Chakraborty, Manaar Alam, Vishal Dey, Anu- pam Chattopadhyay, and Debdeep Mukhopadhyay. 2018. Adversarial attacks and defences: A survey.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. 
BERT: Pre-training of deep bidirectional transformers for language under- standing.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "HotFlip: White-Box adversarial examples for text classification", "authors": [ { "first": "Javid", "middle": [], "last": "Ebrahimi", "suffix": "" }, { "first": "Anyi", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Lowd", "suffix": "" }, { "first": "Dejing", "middle": [], "last": "Dou", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2017. HotFlip: White-Box adversarial exam- ples for text classification.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Black-box generation of adversarial text sequences to evade deep learning classifiers", "authors": [ { "first": "Ji", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Jack", "middle": [], "last": "Lanchantin", "suffix": "" }, { "first": "Mary", "middle": [ "Lou" ], "last": "Soffa", "suffix": "" }, { "first": "Yanjun", "middle": [], "last": "Qi", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "BAE: BERT-based adversarial examples for text classification", "authors": [ { "first": "Siddhant", "middle": [], "last": "Garg", "suffix": "" }, { "first": "Goutham", "middle": [], "last": "Ramakrishnan", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siddhant Garg and Goutham Ramakrishnan. 2020. BAE: BERT-based adversarial examples for text classification.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Explaining and harnessing adversarial examples", "authors": [ { "first": "J", "middle": [], "last": "Ian", "suffix": "" }, { "first": "Jonathon", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Shlens", "suffix": "" }, { "first": "", "middle": [], "last": "Szegedy", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversar- ial examples.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Receiver operating characteristic (ROC) curve analysis for medical diagnostic test evaluation", "authors": [ { "first": "Karimollah", "middle": [], "last": "Hajian-Tilaki", "suffix": "" } ], "year": 2013, "venue": "Caspian J Intern Med", "volume": "4", "issue": "2", "pages": "627--635", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karimollah Hajian-Tilaki. 2013. Receiver operating characteristic (ROC) curve analysis for medical di- agnostic test evaluation. Caspian J Intern Med, 4(2):627-635.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Is bert really robust? 
natural language attack on text classification and entailment", "authors": [ { "first": "Di", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Zhijing", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Joey", "middle": [ "Tianyi" ], "last": "Zhou", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Szolovits", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11932" ] }, "num": null, "urls": [], "raw_text": "Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2019. Is bert really robust? natural lan- guage attack on text classification and entailment. arXiv preprint arXiv:1907. 11932.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Skip-Thought vectors", "authors": [ { "first": "Ryan", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "Yukun", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "R", "middle": [], "last": "Russ", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Raquel", "middle": [], "last": "Zemel", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Urtasun", "suffix": "" }, { "first": "Sanja", "middle": [], "last": "Torralba", "suffix": "" }, { "first": ";", "middle": [], "last": "Fidler", "suffix": "" }, { "first": "N D", "middle": [], "last": "Cortes", "suffix": "" }, { "first": "D D", "middle": [], "last": "Lawrence", "suffix": "" }, { "first": "", "middle": [], "last": "Lee", "suffix": "" }, { "first": "R", "middle": [], "last": "Sugiyama", "suffix": "" }, { "first": "", "middle": [], "last": "Garnett", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems", "volume": "28", "issue": "", "pages": "3294--3302", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan Kiros, Yukun Zhu, Russ R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-Thought vectors. In C Cortes, N D Lawrence, D D Lee, M Sugiyama, and R Garnett, editors, Advances in Neural Informa- tion Processing Systems 28, pages 3294-3302. Cur- ran Associates, Inc.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Adversarial examples for natural language classification problems", "authors": [ { "first": "Volodymyr", "middle": [], "last": "Kuleshov", "suffix": "" }, { "first": "Shantanu", "middle": [], "last": "Thakoor", "suffix": "" }, { "first": "Tingfung", "middle": [], "last": "Lau", "suffix": "" }, { "first": "Stefano", "middle": [], "last": "Ermon", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Volodymyr Kuleshov, Shantanu Thakoor, Tingfung Lau, and Stefano Ermon. 2018. 
Adversarial exam- ples for natural language classification problems.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "ALBERT: A lite BERT for self-supervised learning of language representations", "authors": [ { "first": "Zhenzhong", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Mingda", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Piyush", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A lite BERT for self-supervised learning of language representations.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "TextBugger: Generating adversarial text against real-world applications", "authors": [ { "first": "Jinfeng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shouling", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Tianyu", "middle": [], "last": "Du", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. 2018. TextBugger: Generating adversarial text against real-world applications.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "WordNet: a lexical database for english", "authors": [ { "first": "A", "middle": [], "last": "George", "suffix": "" }, { "first": "", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1995, "venue": "Commun. ACM", "volume": "38", "issue": "11", "pages": "39--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A Miller. 1995. WordNet: a lexical database for english. Commun. ACM, 38(11):39-41.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Yangfeng Ji, and Yanjun Qi. 2020a. Reevaluating adversarial examples in natural language", "authors": [ { "first": "X", "middle": [], "last": "John", "suffix": "" }, { "first": "Eli", "middle": [], "last": "Morris", "suffix": "" }, { "first": "Jack", "middle": [], "last": "Lifland", "suffix": "" }, { "first": "", "middle": [], "last": "Lanchantin", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John X Morris, Eli Lifland, Jack Lanchantin, Yangfeng Ji, and Yanjun Qi. 2020a. 
Reevaluating adversarial examples in natural language.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "TextAttack: A framework for adversarial attacks, data augmentation, and adversarial training in NLP", "authors": [ { "first": "X", "middle": [], "last": "John", "suffix": "" }, { "first": "Eli", "middle": [], "last": "Morris", "suffix": "" }, { "first": "Jin", "middle": [ "Yong" ], "last": "Lifland", "suffix": "" }, { "first": "Jake", "middle": [], "last": "Yoo", "suffix": "" }, { "first": "Di", "middle": [], "last": "Grigsby", "suffix": "" }, { "first": "Yanjun", "middle": [], "last": "Jin", "suffix": "" }, { "first": "", "middle": [], "last": "Qi", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John X Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020b. TextAttack: A frame- work for adversarial attacks, data augmentation, and adversarial training in NLP.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang and Lillian Lee. 2005. Seeing stars: Exploit- ing class relationships for sentiment categorization with respect to rating scales. In Proceedings of the ACL.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Combating adversarial misspellings with robust word recognition", "authors": [ { "first": "Danish", "middle": [], "last": "Pruthi", "suffix": "" }, { "first": "Bhuwan", "middle": [], "last": "Dhingra", "suffix": "" }, { "first": "Zachary", "middle": [ "C" ], "last": "Lipton", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danish Pruthi, Bhuwan Dhingra, and Zachary C Lip- ton. 2019. Combating adversarial misspellings with robust word recognition.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "OpenAI Blog", "volume": "1", "issue": "8", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. 
OpenAI Blog, 1(8):9.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Generating natural language adversarial examples through probability weighted word saliency", "authors": [ { "first": "Yihe", "middle": [], "last": "Shuhuai Ren", "suffix": "" }, { "first": "Kun", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "He", "suffix": "" }, { "first": "", "middle": [], "last": "Che", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "1085--1097", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. pages 1085-1097.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Semantically equivalent adversarial rules for debugging NLP models", "authors": [ { "first": "Sameer", "middle": [], "last": "Marco Tulio Ribeiro", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Singh", "suffix": "" }, { "first": "", "middle": [], "last": "Guestrin", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "856--865", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging NLP models. pages 856-865.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter", "authors": [ { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Perelygin", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Y", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "1631--1642", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. 
pages 1631-1642.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Adversarial examples on graph data: Deep insights into attack and defense", "authors": [ { "first": "Huijun", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Chen", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yuriy", "middle": [], "last": "Tyshetskiy", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Docherty", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Liming", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huijun Wu, Chen Wang, Yuriy Tyshetskiy, Andrew Docherty, Kai Lu, and Liming Zhu. 2019. Adversarial examples on graph data: Deep insights into attack and defense.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Elephant in the room: An evaluation framework for assessing adversarial examples in NLP", "authors": [ { "first": "Ying", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Antonio Jose Jimeno", "middle": [], "last": "Yepes", "suffix": "" }, { "first": "Jey Han", "middle": [], "last": "Lau", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ying Xu, Xu Zhong, Antonio Jose Jimeno Yepes, and Jey Han Lau. 2020. Elephant in the room: An evaluation framework for assessing adversarial examples in NLP.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "PAWS-X: A cross-lingual adversarial dataset for paraphrase identification", "authors": [ { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Tar", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yinfei Yang, Yuan Zhang, Chris Tar, and Jason Baldridge. 2019. PAWS-X: A cross-lingual adversarial dataset for paraphrase identification.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "BERTScore: Evaluating text generation with BERT", "authors": [ { "first": "Tianyi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Varsha", "middle": [], "last": "Kishore", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Q", "middle": [], "last": "Kilian", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Weinberger", "suffix": "" }, { "first": "", "middle": [], "last": "Artzi", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. 
BERTScore: Evaluating text generation with BERT.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Adversarial attacks on deep-learning models in natural language processing: A survey", "authors": [ { "first": "Wei", "middle": [ "Emma" ], "last": "Zhang", "suffix": "" }, { "first": "Z", "middle": [], "last": "Quan", "suffix": "" }, { "first": "Ahoud", "middle": [], "last": "Sheng", "suffix": "" }, { "first": "Chenliang", "middle": [], "last": "Alhazmi", "suffix": "" }, { "first": "", "middle": [], "last": "Li", "suffix": "" } ], "year": 2020, "venue": "ACM Trans. Intell. Syst. Technol", "volume": "11", "issue": "3", "pages": "1--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Emma Zhang, Quan Z Sheng, Ahoud Alhazmi, and Chenliang Li. 2020a. Adversarial attacks on deep-learning models in natural language processing: A survey. ACM Trans. Intell. Syst. Technol., 11(3):1-41.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Adversarial attacks on deep-learning models in natural language processing: A survey", "authors": [ { "first": "Wei", "middle": [ "Emma" ], "last": "Zhang", "suffix": "" }, { "first": "Z", "middle": [], "last": "Quan", "suffix": "" }, { "first": "Ahoud", "middle": [], "last": "Sheng", "suffix": "" }, { "first": "Chenliang", "middle": [], "last": "Alhazmi", "suffix": "" }, { "first": "", "middle": [], "last": "Li", "suffix": "" } ], "year": 2020, "venue": "ACM Trans. Intell. Syst. Technol", "volume": "11", "issue": "3", "pages": "1--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Emma Zhang, Quan Z Sheng, Ahoud Alhazmi, and Chenliang Li. 2020b. Adversarial attacks on deep-learning models in natural language processing: A survey. ACM Trans. Intell. Syst. Technol., 11(3):1-41.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Generating natural adversarial examples", "authors": [ { "first": "Zhengli", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Dheeru", "middle": [], "last": "Dua", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhengli Zhao, Dheeru Dua, and Sameer Singh. 2017. Generating natural adversarial examples.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "A second-order adversarial example in NLP.", "uris": null, "type_str": "figure", "num": null }, "FIGREF1": { "text": "An example constraint robustness curve. ACCS is defined as the normalized area under the constraint robustness curve.", "uris": null, "type_str": "figure", "num": null }, "FIGREF2": { "text": "Attack prototypes generated for attacks run in TextAttack. The top shows the first-order attack, run against a classification model using the semantic similarity model as a constraint. The bottom shows the second-order attack, run directly against a semantic similarity model. During experiments, [Constraint] is either USE or BERTScore, [\u03b5] is varied from 0.5 to 1 or 0.75 to 1, and [ ] is set to 3.", "uris": null, "type_str": "figure", "num": null }, "FIGREF3": { "text": "First-order and second-order adversarial examples generated by our attacks on BERT-base finetuned on the SST-2 dataset.", "uris": null, "type_str": "figure", "num": null }, "FIGREF4": { "text": "Constraint robustness curves across attacks. The Universal Sentence Encoder finds more adversarial examples in each model while yielding fewer adversarial examples via second-order attacks. 
ACCS results are detailed in", "uris": null, "type_str": "figure", "num": null } } } }