{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:09:05.660702Z" }, "title": "Searching for a Search Method: Benchmarking Search Algorithms for Generating NLP Adversarial Examples", "authors": [ { "first": "Jin", "middle": [], "last": "Yong", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Virginia", "location": {} }, "email": "" }, { "first": "John", "middle": [ "X" ], "last": "Morris", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Virginia", "location": {} }, "email": "" }, { "first": "Eli", "middle": [], "last": "Lifland", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Virginia", "location": {} }, "email": "" }, { "first": "Yanjun", "middle": [], "last": "Qi", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Virginia", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We study the behavior of several black-box search algorithms used for generating adversarial examples for natural language processing (NLP) tasks. We perform a fine-grained analysis of three elements relevant to search: search algorithm, search space, and search budget. When new search algorithms are proposed in past work, the attack search space is often modified alongside the search algorithm. Without ablation studies benchmarking the search algorithm change with the search space held constant, one cannot tell if an increase in attack success rate is a result of an improved search algorithm or a less restrictive search space. Additionally, many previous studies fail to properly consider the search algorithms' run-time cost, which is essential for downstream tasks like adversarial training. Our experiments provide a reproducible benchmark of search algorithms across a variety of search spaces and query budgets to guide future research in adversarial NLP. 
Based on our experiments, we recommend greedy attacks with word importance ranking when under a time constraint or attacking long inputs, and either beam search or particle swarm optimization otherwise.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We study the behavior of several black-box search algorithms used for generating adversarial examples for natural language processing (NLP) tasks. We perform a fine-grained analysis of three elements relevant to search: search algorithm, search space, and search budget. When new search algorithms are proposed in past work, the attack search space is often modified alongside the search algorithm. Without ablation studies benchmarking the search algorithm change with the search space held constant, one cannot tell if an increase in attack success rate is a result of an improved search algorithm or a less restrictive search space. Additionally, many previous studies fail to properly consider the search algorithms' run-time cost, which is essential for downstream tasks like adversarial training. Our experiments provide a reproducible benchmark of search algorithms across a variety of search spaces and query budgets to guide future research in adversarial NLP. Based on our experiments, we recommend greedy attacks with word importance ranking when under a time constraint or attacking long inputs, and either beam search or particle swarm optimization otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Research has shown that current deep neural network models lack the ability to make correct predictions on adversarial examples (Szegedy et al., 2013) . 
The field of investigating the adversarial robustness of NLP models has seen growing interest, both in contributing new attack methods for generating adversarial examples (Ebrahimi et al., 2017; Gao et al., 2018; Alzantot et al., 2018; Jin et al., 2019; Ren et al., 2019; Zang et al., 2020) and better training strategies to make models resistant to adversaries (Jia et al., 2019; Goodfellow et al., 2014).", "cite_spans": [ { "start": 128, "end": 150, "text": "(Szegedy et al., 2013)", "ref_id": "BIBREF23" }, { "start": 326, "end": 349, "text": "(Ebrahimi et al., 2017;", "ref_id": "BIBREF7" }, { "start": 350, "end": 367, "text": "Gao et al., 2018;", "ref_id": "BIBREF8" }, { "start": 368, "end": 390, "text": "Alzantot et al., 2018;", "ref_id": "BIBREF1" }, { "start": 391, "end": 408, "text": "Jin et al., 2019;", "ref_id": "BIBREF12" }, { "start": 409, "end": 426, "text": "Ren et al., 2019;", "ref_id": "BIBREF21" }, { "start": 427, "end": 445, "text": "Zang et al., 2020)", "ref_id": "BIBREF25" }, { "start": 517, "end": 535, "text": "(Jia et al., 2019;", "ref_id": "BIBREF11" }, { "start": 536, "end": 560, "text": "Goodfellow et al., 2014)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recent studies formulate NLP adversarial attacks as a combinatorial search task and feature the specific search algorithm they use as the key contribution (Zhang et al., 2019b). The search algorithm aims to perturb a text input with language transformations such as misspellings or synonym substitutions in order to fool a target NLP model while the perturbation adheres to some linguistic constraints (e.g., edit distance, grammar constraint, semantic similarity constraint) (Morris et al., 2020a).
Many search algorithms have been proposed for this process, including varieties of greedy search, beam search, and population-based search.", "cite_spans": [ { "start": 155, "end": 176, "text": "(Zhang et al., 2019b)", "ref_id": "BIBREF28" }, { "start": 476, "end": 498, "text": "(Morris et al., 2020a)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The literature includes a mixture of incomparable and unclear results when comparing search strategies since studies often fail to consider the other two necessary primitives in the search process: the search space (choice of transformation and constraints) and the search budget (in queries to the victim model). The lack of a consistent benchmark on search algorithms has hindered the use of adversarial examples to understand and to improve NLP models. In this work, we attempt to clear the air by answering the following question: Which search algorithm should NLP researchers pick for generating NLP adversarial examples?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We focus on black-box search algorithms due to their practicality and prevalence in the NLP attack literature. Our goal is to understand to what extent the choice of search algorithm matters in generating text adversarial examples and how different search algorithms compare when we hold the search space constant or when we standardize the search cost. We select three families of search algorithms proposed in the literature and benchmark their performance on generating adversarial examples for sentiment classification and textual entailment tasks. 
Our main findings can be summarized as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Across three datasets and three search spaces, we found that beam search and particle swarm optimization are the best algorithms in terms of attack success rate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 When under a time constraint or when the input text is long, greedy search with word importance ranking is preferred and offers sufficient performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Complex algorithms such as PWWS (Ren et al., 2019) and the genetic algorithm (Alzantot et al., 2018) are often less performant than simple greedy methods in terms of both attack success rate and speed.", "cite_spans": [ { "start": 75, "end": 98, "text": "(Alzantot et al., 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Morris et al. (2020b) formulated the process of generating natural language adversarial examples as a system of four components: a goal function, a set of constraints, a transformation, and a search algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Components of an NLP Attack", "sec_num": "2.1" }, { "text": "x to x' that fools a predictive NLP model by both achieving some goal (like fooling the model into predicting the wrong classification label) and fulfilling certain constraints. 
The search algorithm attempts to find a sequence of transformations that results in a successful perturbation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Such a system searches for a perturbation from", "sec_num": null }, { "text": "Search Algorithm: Recent methods proposed for generating adversarial examples in NLP frame their approach as a combinatorial search problem. This is necessary because of the exponential nature of the search space. Consider the search space for an adversarial attack that replaces words with synonyms: If a given sequence of text consists of W words, and each word has T potential substitutions, the total number of perturbed inputs to consider is (T + 1)^W - 1. Thus, the graph of all potential adversarial examples for a given input is far too large for an exhaustive search.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Elements of a Search Process", "sec_num": "2.2" }, { "text": "While heuristic search algorithms cannot guarantee an optimal solution, they can be employed to efficiently search this space for a valid adversarial example. 
Studies on NLP attacks have explored various heuristic search algorithms, including beam search (Ebrahimi et al., 2017) , genetic algorithm (Alzantot et al., 2018) , and greedy method with word importance ranking (Gao et al., 2018; Jin et al., 2019; Ren et al., 2019) .", "cite_spans": [ { "start": 255, "end": 278, "text": "(Ebrahimi et al., 2017)", "ref_id": "BIBREF7" }, { "start": 299, "end": 322, "text": "(Alzantot et al., 2018)", "ref_id": "BIBREF1" }, { "start": 372, "end": 390, "text": "(Gao et al., 2018;", "ref_id": "BIBREF8" }, { "start": 391, "end": 408, "text": "Jin et al., 2019;", "ref_id": "BIBREF12" }, { "start": 409, "end": 426, "text": "Ren et al., 2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Elements of a Search Process", "sec_num": "2.2" }, { "text": "Search Space: In addition to its search method, an NLP attack is defined by how it chooses its search space. The search space is mainly determined by two things: a transformation, which defines how the original text is perturbed (e.g. word substitution, word deletion) and the set of linguistic constraints (e.g minimum semantic similarity, correct grammar) enforced to ensure that the perturbed text is a valid adversarial example. A larger search space corresponds to a looser definition of a valid adversarial example. With a looser definition, the search space includes more candidate adversarial examples. The more candidates there are, the more likely the search is to find an example that fools the victim model -thereby achieving a higher attack success rate (Morris et al., 2020b) .", "cite_spans": [ { "start": 767, "end": 789, "text": "(Morris et al., 2020b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Elements of a Search Process", "sec_num": "2.2" }, { "text": "Search Cost/Budget: Furthermore, most works do not consider the runtime of the search algorithms. This has created a large, previously unspoken disparity in runtimes of proposed works. 
Population-based algorithms like Alzantot et al. (2018) and Zang et al. (2020) are significantly more expensive than greedy algorithms like Jin et al. (2019) and Ren et al. (2019). Additionally, greedy algorithms with word importance ranking are linear with respect to input length, while beam search algorithms are quadratic. In tasks such as adversarial training, adversarial examples must be generated quickly, and a more efficient algorithm may be preferable, even at the expense of a lower attack success rate.", "cite_spans": [ { "start": 218, "end": 240, "text": "Alzantot et al. (2018)", "ref_id": "BIBREF1" }, { "start": 245, "end": 263, "text": "Zang et al. (2020)", "ref_id": "BIBREF25" }, { "start": 325, "end": 342, "text": "Jin et al. (2019)", "ref_id": "BIBREF12" }, { "start": 347, "end": 364, "text": "Ren et al. (2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Elements of a Search Process", "sec_num": "2.2" }, { "text": "Past studies on NLP attacks that propose new search algorithms often also propose a slightly altered search space, introducing either new transformations or new constraints. When new search algorithms are benchmarked in a new search space, they cannot be easily compared with search algorithms from other attacks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating Novel Search Algorithms", "sec_num": "2.3" }, { "text": "To show improvements over a search method from previous work, a new search method must be benchmarked in the search space of the original method. However, many works fail to keep the search space consistent when comparing their method to baseline methods. For example, Jin et al. (2019) compares its TextFooler method against Alzantot et al. (2018)'s method without accounting for the fact that TextFooler uses the Universal Sentence Encoder (Cer et al., 2018) to filter perturbed text while Alzantot et al. 
(2018) uses the Google 1 Billion Word language model (Chelba et al., 2013). A more severe case is Zhang et al. (2019a), which claims that its Metropolis-Hastings sampling method is superior to Alzantot et al. (2018) without setting any of the constraints that Alzantot et al. (2018) uses to ensure that the perturbed text preserves the original semantics of the text.", "cite_spans": [ { "start": 332, "end": 354, "text": "Alzantot et al. (2018)", "ref_id": "BIBREF1" }, { "start": 449, "end": 467, "text": "(Cer et al., 2018)", "ref_id": "BIBREF3" }, { "start": 499, "end": 521, "text": "Alzantot et al. (2018)", "ref_id": "BIBREF1" }, { "start": 565, "end": 586, "text": "(Chelba et al., 2013)", "ref_id": "BIBREF4" }, { "start": 766, "end": 788, "text": "Alzantot et al. (2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluating Novel Search Algorithms", "sec_num": "2.3" }, { "text": "We do note that Ren et al. (2019) and Zang et al. (2020) do provide comparisons where the search spaces are consistent. However, these works consider only a small number of search algorithms as baseline methods, and fail to provide a comprehensive comparison of the methods proposed in the literature.", "cite_spans": [ { "start": 35, "end": 53, "text": "Zang et al. (2020)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluating Novel Search Algorithms", "sec_num": "2.3" }, { "text": "As defined in Section 2.1, each NLP adversarial attack includes four components: a goal function, constraints, a transformation, and a search algorithm. We define the attack search space as the set of perturbed texts x' that are generated for an original input x via valid transformations and satisfy a set of linguistic constraints. The goal of a search algorithm is to find those x' that achieve the attack goal function (i.e. 
fooling a victim model) as fast as it can.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Defining Search Spaces", "sec_num": "3.1" }, { "text": "Word-swap transformations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Defining Search Spaces", "sec_num": "3.1" }, { "text": "Assuming x = (x_1, . . . , x_i, . . . , x_n)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Defining Search Spaces", "sec_num": "3.1" }, { "text": ", a perturbed text x' can be generated by swapping x_i with an altered x'_i. The swap can occur at the word, character, or sentence level, depending on the granularity of x_i. Most works in the literature choose to swap out words; therefore, we focus on word-swap transformations for our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Defining Search Spaces", "sec_num": "3.1" }, { "text": "Constraints: Morris et al. (2020b) proposed a set of linguistic constraints to enforce that x and the perturbed x' should be similar in both meaning and fluency to make x' a valid potential adversarial example. This means that the search space should ensure x and x' are close in semantic embedding space. Multiple automatic constraint-enforcement strategies have been proposed in the literature. For example, when swapping word x_i with x'_i, we can require that the cosine similarity between the word embedding vectors e_{x_i} and e_{x'_i} meets a certain minimum threshold. More details on the specific constraints we use are in Section A.1. We now use the notation T(x) = x' to denote a transformation perturbing x to x', and represent the j-th constraint as a Boolean function C_j(x, x') indicating whether x' satisfies the constraint C_j. 
Then, we can define the search space S mathematically as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Defining Search Spaces", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "S(x) = {T(x) | C_j(x, T(x)) ∀j ∈ [m]}", "eq_num": "(1)" } ], "section": "Defining Search Spaces", "sec_num": "3.1" }, { "text": "The goal of a search algorithm is to find x' ∈ S(x) such that x' succeeds in fooling the victim model. Table 1 describes the three search spaces we use to benchmark the search algorithms. Details of the transformations and constraints used in defining these search spaces are in Appendix Section A.1. ", "cite_spans": [], "ref_spans": [ { "start": 105, "end": 112, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Defining Search Spaces", "sec_num": "3.1" }, { "text": "Search algorithms evaluate potential perturbations before branching out to other solutions. In the case of an untargeted attack against a classifier, the adversary aims to find examples that make the classifier predict the wrong class (label) for x'. 
Here the assumption is that the ground truth label of x' is the same as that of the original x.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristic Scoring Function", "sec_num": "3.2" }, { "text": "Naturally, we use a heuristic scoring function score defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristic Scoring Function", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "score(x') = 1 - F_y(x')", "eq_num": "(2)" } ], "section": "Heuristic Scoring Function", "sec_num": "3.2" }, { "text": "where F_y(x) is the probability of class y predicted by the model and y is the ground truth output of the original text x.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Heuristic Scoring Function", "sec_num": "3.2" }, { "text": "We select the following five search algorithms proposed for generating adversarial examples, summarized in Table 2 . All search algorithms are limited to modifying each word at most once.", "cite_spans": [], "ref_spans": [ { "start": 107, "end": 114, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Search Algorithms", "sec_num": "3.3" }, { "text": "Deterministic? Hyperparameters Num. Queries Beam Search (Ebrahimi et al., 2017) ✓ (Gao et al., 2018; Jin et al., 2019; Ren et al., 2019) ✓", "cite_spans": [ { "start": 56, "end": 79, "text": "(Ebrahimi et al., 2017)", "ref_id": "BIBREF7" }, { "start": 82, "end": 100, "text": "(Gao et al., 2018;", "ref_id": "BIBREF8" }, { "start": 101, "end": 118, "text": "Jin et al., 2019;", "ref_id": "BIBREF12" }, { "start": 119, "end": 136, "text": "Ren et al., 2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Search Algorithm", "sec_num": null }, { "text": "b (beam width) O(b × W^2 × T) Greedy [Beam Search with b=1] ✓ - O(W^2 × T) Greedy w. 
Word Importance Ranking", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search Algorithm", "sec_num": null }, { "text": "- O(W × T) Genetic Algorithm (Alzantot et al., 2018) ✗ p (population size), g (number of iterations) O(g × p × T)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search Algorithm", "sec_num": null }, { "text": "Particle Swarm Optimization (Zang et al., 2020) ✗ p (population size), g (number of iterations) Beam Search For a given text x, all possible perturbed texts x' generated by substituting each word x_i are scored using the heuristic scoring function, and the top b texts are kept (b is called the \"beam width\"). Then, the process repeats by further perturbing each of the top b perturbed texts to generate the next set of candidates.", "cite_spans": [ { "start": 28, "end": 47, "text": "(Zang et al., 2020)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Search Algorithm", "sec_num": null }, { "text": "O(g × p × W × T)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search Algorithm", "sec_num": null }, { "text": "Greedy Search Like beam search, each x_i is considered for substitution. We take the best perturbation across all possible perturbations, and repeat until we succeed or run out of possible perturbations. This is equivalent to beam search with b set to 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search Algorithm", "sec_num": null }, { "text": "Words of the given input x are ranked according to some importance function. Then, in order of descending importance, word x_i is substituted with the x'_i that maximizes the scoring function, until the goal is achieved or all words have been perturbed. 
We experiment with five different ways to determine word importance:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Greedy with Word Importance Ranking (WIR)", "sec_num": null }, { "text": "\u2022 UNK: Each word's importance is determined by how much the heuristic score changes when the word is substituted with an UNK token (Gao et al., 2018).", "cite_spans": [ { "start": 131, "end": 148, "text": "(Gao et al., 2018", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Greedy with Word Importance Ranking (WIR)", "sec_num": null }, { "text": "\u2022 DEL: Each word's importance is determined by how much the heuristic score changes when the word is deleted from the original input (Jin et al., 2019).", "cite_spans": [ { "start": 133, "end": 150, "text": "(Jin et al., 2019", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Greedy with Word Importance Ranking (WIR)", "sec_num": null }, { "text": "\u2022 PWWS: Each word's importance is determined by multiplying the change in score when the word is substituted with an UNK token by the maximum score gained by perturbing the word (Ren et al., 2019).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Greedy with Word Importance Ranking (WIR)", "sec_num": null }, { "text": "\u2022 Gradient: Similar to how Wallace et al. (2019) visualize the saliency of words, each word's importance is determined by calculating the gradient of the loss with respect to the word and taking its norm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Greedy with Word Importance Ranking (WIR)", "sec_num": null }, { "text": "\u2022 RAND: Words are ranked in a random order; this serves as a baseline for the other ranking schemes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Greedy with Word Importance Ranking (WIR)", "sec_num": null }, { "text": "Genetic Algorithm. We implement the genetic algorithm of Alzantot et al. (2018). At each iteration, each member of the population is perturbed by randomly choosing one word and picking the best x' gained by perturbing it. Then, crossover occurs between members of the population, with preference given to the more successful members. 
The algorithm is run for a fixed number of iterations, terminating early if it succeeds. Following Alzantot et al. (2018), the population size was 60 and the algorithm was run for a maximum of 20 iterations.", "cite_spans": [ { "start": 57, "end": 79, "text": "Alzantot et al. (2018)", "ref_id": "BIBREF1" }, { "start": 434, "end": 456, "text": "Alzantot et al. (2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Greedy with Word Importance Ranking (WIR)", "sec_num": null }, { "text": "Particle Swarm Optimization We implement the particle swarm optimization (PSO) algorithm of Zang et al. (2020). At each iteration, each member of the population is perturbed by first generating all potential x' obtained by substituting each x_i and then sampling one x'. Each member is also crossed over with the best perturbed text previously found for that member (i.e. the local optimum) and the best perturbed text found among all members (i.e. the global optimum). Following Zang et al. (2020), the population size is set to 60 and the algorithm is run for a maximum of 20 iterations.", "cite_spans": [ { "start": 92, "end": 110, "text": "Zang et al. (2020)", "ref_id": "BIBREF25" }, { "start": 470, "end": 488, "text": "Zang et al. (2020)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Greedy with Word Importance Ranking (WIR)", "sec_num": null }, { "text": "Our genetic algorithm and PSO implementations have one small difference from the original implementations. The original implementations contain crossover operations that further perturb the text without considering whether the resulting text meets the defined constraints. 
In our implementation, we check whether the text produced by these subroutines meets our constraints to ensure a consistent search space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Greedy with Word Importance Ranking (WIR)", "sec_num": null }, { "text": "We attack BERT-base (Devlin et al., 2018) and an LSTM fine-tuned on three different datasets:", "cite_spans": [ { "start": 20, "end": 41, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Victim Models", "sec_num": "3.4" }, { "text": "\u2022 Yelp polarity reviews (Zhang et al., 2015) (sentiment classification)", "cite_spans": [ { "start": 24, "end": 43, "text": "(Zhang et al., 2015", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Victim Models", "sec_num": "3.4" }, { "text": "\u2022 Movie Reviews (MR) (Pang and Lee, 2005) (sentiment classification)", "cite_spans": [ { "start": 21, "end": 40, "text": "(Pang and Lee, 2005", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Victim Models", "sec_num": "3.4" }, { "text": "\u2022 Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) (textual entailment).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Victim Models", "sec_num": "3.4" }, { "text": "For the Yelp and SNLI datasets, we attack 1000 samples from the test set, and for the MR dataset, we attack 500 samples. The language of all three datasets is English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Victim Models", "sec_num": "3.4" }, { "text": "We implement all of our attacks using the NLP attack package TextAttack (Morris et al., 2020a). 
TextAttack provides separate modules for search algorithms, transformations, and constraints, so we can easily compare search algorithms without changing any other part of the attack.", "cite_spans": [ { "start": 74, "end": 96, "text": "(Morris et al., 2020a)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation", "sec_num": "3.5" }, { "text": "We use attack success rate (# of successful attacks / # of total attacks) to measure how successful each search algorithm is at attacking a victim model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "3.6" }, { "text": "To measure the runtime of each algorithm, we use the average number of queries to the victim model as a proxy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "3.6" }, { "text": "To measure the quality of adversarial examples generated by each algorithm, we use three metrics:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "3.6" }, { "text": "1. Average percentage of words perturbed 2. Universal Sentence Encoder (Cer et al., 2018) similarity between x and x' 3. Percent change in perplexity between x and x' (using GPT-2 (Radford et al., 2019)) 4 Results and Analysis. Table 3 shows the results of each attack when each search algorithm is allowed to query the victim model an unlimited number of times. Word importance ranking methods make far fewer queries than beam or population-based search, while retaining over 60% of their attack success rate in each case. Beam search (b=8) and PSO are the two most successful search algorithms in every model-dataset combination. However, PSO is more query-intensive. 
On average, PSO requires 6.3 times more queries than beam search (b=8), but its attack success rate is only on average 1.2% higher than that of beam search (b=8).", "cite_spans": [ { "start": 71, "end": 89, "text": "(Cer et al., 2018)", "ref_id": "BIBREF3" }, { "start": 179, "end": 202, "text": "(Radford et al., 2019))", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 226, "end": 233, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "3.6" }, { "text": "Using the number of queries to the victim model as a proxy for total runtime, Figure 1 illustrates how the number of words in the input affects runtime for each algorithm. We can empirically confirm that beam and greedy search algorithms scale quadratically with input length, while word importance ranking scales linearly. For shorter datasets, this did not make a significant difference. However, for the longer Yelp dataset, the linear word importance ranking strategies are significantly more query-efficient. These observations match the expected runtimes of the algorithms described in Table 2. For shorter datasets, the genetic and PSO algorithms are significantly more expensive than the other algorithms, as the population size and number of iterations are the dominating factors. Furthermore, PSO is observed to be more expensive than the genetic algorithm.", "cite_spans": [], "ref_spans": [ { "start": 72, "end": 80, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 586, "end": 594, "text": "Table 2.", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Runtime Analysis", "sec_num": "4.2" }, { "text": "In a realistic attack scenario, the attacker must conserve the number of queries made to the model. To see which search method was most query-efficient, we calculated the search methods' attack success rates under a range of query budgets. 
Figure 2 shows the attack success rate of each search algorithm as the maximum number of queries permitted to perturb a single sample varies from 0 to 20,000 for the Yelp dataset and 0 to 3,000 for MR and SNLI.", "cite_spans": [], "ref_spans": [ { "start": 239, "end": 247, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Performance under Query Budget", "sec_num": "4.3" }, { "text": "For both the Yelp and MR datasets, the linear (word importance ranking) methods show relatively high success rates within just a few queries, but are eventually surpassed by the slower, quadratic methods (greedy and beam search). The genetic algorithm and PSO lag behind. For SNLI, we see exceptions, as the initial queries that the linear methods make to determine the word importance ranking do not pay off and the other algorithms appear more efficient with their queries. This shows that the most effective search method depends both on the attacker's query budget and on the victim model. An attacker with a small query budget may prefer a linear method, but an attacker with a larger query budget may choose a quadratic method, making more queries in exchange for a higher success rate. (Table 3: Comparison of search methods across three datasets. Models are BERT-base and an LSTM fine-tuned for the respective task. \"A.S.%\" represents attack success rate and \"Avg # Queries\" represents the average number of queries made to the model per successfully attacked sample.)", "cite_spans": [], "ref_spans": [ { "start": 784, "end": 791, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Performance under Query Budget", "sec_num": "4.3" }, { "text": "Lastly, we can see that both the Gradient and RAND ranking methods are initially more successful than the UNK and DEL methods, which is due to the overhead involved in calculating the word importance ranking for UNK and DEL: for both methods, each attack makes W queries to determine the importance of each word. 
Still, UNK and DEL outperform RAND at all but the smallest query budgets, indicating that the order in which words are swapped does matter. Furthermore, in 12 out of 15 scenarios, the UNK and DEL methods perform as well as or even better than the Gradient method, which shows that they are excellent substitutes for the Gradient method in black-box attacks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance under Query Budget", "sec_num": "4.3" }, { "text": "For quality evaluation, we selected adversarial examples whose original text x was successfully attacked by all search algorithms. Full results of the quality evaluation are shown in Table 4 in the appendix. We can see that the beam search algorithms consistently perturb the lowest percentage of words. Furthermore, we see that perturbing fewer words generally corresponds with higher average USE similarity between x and x_adv and a smaller increase in perplexity. This indicates that the beam search algorithms generate higher-quality adversarial examples than the other search algorithms.", "cite_spans": [], "ref_spans": [ { "start": 178, "end": 185, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Quality of Adversarial Examples", "sec_num": "4.4" }, { "text": "Across all nine scenarios, we can see that the choice of search algorithm can have a modest impact on the attack success rate. Query-hungry algorithms such as beam search, the genetic algorithm, and PSO perform better than the fast WIR methods. Out of the WIR methods, PWWS performs significantly better than the UNK and DEL methods. In every case, we see a clear trade-off of performance versus speed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "How to Choose A Search Algorithm", "sec_num": "5.1" }, { "text": "With this in mind, one might wonder how best to choose a suitable search algorithm. The main factor to consider is the length of the input text. If the input texts are short (e.g. 
a sentence or two), beam search is certainly the appropriate choice: it can achieve a high success rate without sacrificing too much speed. However, when the input text is longer than a few sentences, WIR methods are the most practical choice. If one wishes for the best performance on longer inputs regardless of efficiency, beam search and PSO are the top choices.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "How to Choose A Search Algorithm", "sec_num": "5.1" }, { "text": "Across all tasks, the UNK and DEL methods perform about equivalently, while PWWS performs significantly better than UNK and DEL. In fact, PWWS performs better than greedy search in two cases. (Figure 2: Attack success rate by query budget for each search algorithm and dataset. A similar figure for LSTM models is available in the appendix.) However, this gain in performance does come at a cost: PWWS makes a far larger number of queries to the victim model to determine the word importance ranking. Out of the 15 experiments, PWWS makes more queries than greedy search in 8 of them. Yet, on average, greedy search outperforms PWWS by 2.5%.", "cite_spans": [], "ref_spans": [ { "start": 143, "end": 151, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Effectiveness of PWWS Word Importance Ranking", "sec_num": "5.2" }, { "text": "Our results question the utility of the PWWS search method. PWWS offers neither performance competitive with greedy search nor query efficiency competitive with UNK or DEL.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effectiveness of PWWS Word Importance Ranking", "sec_num": "5.2" }, { "text": "The genetic algorithm proposed by Alzantot et al. (2018) uses more queries than the greedy-based beam search (b=8) in 11 of the 15 scenarios, but only achieves a higher attack success rate in 1 scenario. 
Thus, it is generally strictly worse than the simpler beam search (b=8), achieving a lower success rate at a higher cost.", "cite_spans": [ { "start": 34, "end": 56, "text": "Alzantot et al. (2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Effectiveness of Genetic Algorithm", "sec_num": "5.3" }, { "text": "The goal of this paper is not to introduce a new method, but to provide an empirical analysis of how search algorithms from recent studies contribute to generating natural language adversarial examples. We evaluated six search algorithms on BERT-base and LSTM models fine-tuned on three datasets. Our results show that when runtime is not a concern, the best-performing methods are beam search and particle swarm optimization. If runtime is a concern, greedy search with word importance ranking is the preferable method. We hope that our findings will set a new standard for the reproducibility and evaluation of search algorithms for NLP adversarial examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Zhang et al. (2019a) is not considered in this paper because we were unable to replicate its results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For sub-word tokenization schemes, we take the average over all sub-words constituting the word. We test an additional scheme, which we call RAND, as an ablation study. Instead of perturbing words in order of their importance, RAND perturbs words in a random order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "TextAttack is available at https://github.com/QData/TextAttack. This is with one outlier (BERT-SNLI with GLOVE word embedding) ignored. 
If it is included, the number jumps to 10.8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was partly supported by the National Science Foundation CCF-1900676. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect those of the National Science Foundation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements on Funding:", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Contextual string embeddings for sequence labeling", "authors": [ { "first": "Alan", "middle": [], "last": "Akbik", "suffix": "" }, { "first": "Duncan", "middle": [], "last": "Blythe", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Vollgraf", "suffix": "" } ], "year": 2018, "venue": "COLING 2018, 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1638--1649", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. 
In COLING 2018, 27th International Conference on Computational Linguistics, pages 1638-1649.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Generating natural language adversarial examples", "authors": [ { "first": "Moustafa", "middle": [], "last": "Alzantot", "suffix": "" }, { "first": "Yash", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Ahmed", "middle": [], "last": "Elgohary", "suffix": "" }, { "first": "Bo-Jhang", "middle": [], "last": "Ho", "suffix": "" }, { "first": "Mani", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Kai-Wei", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1804.07998" ] }, "num": null, "urls": [], "raw_text": "Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. arXiv preprint arXiv:1804.07998.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A large annotated corpus for learning natural language inference", "authors": [ { "first": "R", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "Gabor", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Potts", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). 
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil", "authors": [ { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Sheng-Yi", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Hua", "suffix": "" }, { "first": "Nicole", "middle": [], "last": "Limtiaco", "suffix": "" }, { "first": "Rhomni", "middle": [], "last": "St", "suffix": "" }, { "first": "Noah", "middle": [], "last": "John", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Guajardo-Cespedes", "suffix": "" }, { "first": "", "middle": [], "last": "Yuan", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder. 
CoRR, abs/1803.11175.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "One billion word benchmark for measuring progress in statistical language modeling", "authors": [ { "first": "Ciprian", "middle": [], "last": "Chelba", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Ge", "suffix": "" }, { "first": "Thorsten", "middle": [], "last": "Brants", "suffix": "" }, { "first": "Phillipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, and Phillipp Koehn. 2013. One billion word benchmark for measuring progress in statistical language modeling. CoRR, abs/1312.3005.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. 
CoRR, abs/1810.04805.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Hownet and its computation of meaning", "authors": [ { "first": "Zhendong", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Qiang", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Changling", "middle": [], "last": "Hao", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Demonstrations, COLING '10", "volume": "", "issue": "", "pages": "53--56", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhendong Dong, Qiang Dong, and Changling Hao. 2010. Hownet and its computation of meaning. In Proceedings of the 23rd International Conference on Computational Linguistics: Demonstrations, COLING '10, pages 53-56, USA. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Hotflip: White-box adversarial examples for text classification", "authors": [ { "first": "Javid", "middle": [], "last": "Ebrahimi", "suffix": "" }, { "first": "Anyi", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Lowd", "suffix": "" }, { "first": "Dejing", "middle": [], "last": "Dou", "suffix": "" } ], "year": 2017, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2017. Hotflip: White-box adversarial examples for text classification. 
In ACL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Black-box generation of adversarial text sequences to evade deep learning classifiers", "authors": [ { "first": "Ji", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Jack", "middle": [], "last": "Lanchantin", "suffix": "" }, { "first": "Mary", "middle": [ "Lou" ], "last": "Soffa", "suffix": "" }, { "first": "Yanjun", "middle": [], "last": "Qi", "suffix": "" } ], "year": 2018, "venue": "IEEE Security and Privacy Workshops", "volume": "", "issue": "", "pages": "50--56", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers. 2018 IEEE Security and Privacy Workshops (SPW), pages 50-56.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Explaining and harnessing adversarial examples", "authors": [ { "first": "J", "middle": [], "last": "Ian", "suffix": "" }, { "first": "Jonathon", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Shlens", "suffix": "" }, { "first": "", "middle": [], "last": "Szegedy", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6572" ] }, "num": null, "urls": [], "raw_text": "Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. 
arXiv preprint arXiv:1412.6572.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Adversarial example generation with syntactically controlled paraphrase networks", "authors": [ { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "John", "middle": [], "last": "Wieting", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Certified robustness to adversarial word substitutions", "authors": [ { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Aditi", "middle": [], "last": "Raghunathan", "suffix": "" }, { "first": "Kerem", "middle": [], "last": "G\u00f6ksel", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.00986" ] }, "num": null, "urls": [], "raw_text": "Robin Jia, Aditi Raghunathan, Kerem G\u00f6ksel, and Percy Liang. 2019. Certified robustness to adversarial word substitutions. arXiv preprint arXiv:1909.00986.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Is bert really robust? 
natural language attack on text classification and entailment", "authors": [ { "first": "Di", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Zhijing", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Joey", "middle": [ "Tianyi" ], "last": "Zhou", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Szolovits", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2019. Is bert really robust? natural language attack on text classification and entailment. ArXiv, abs/1907.11932.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Discrete adversarial attacks and submodular optimization with applications to text classification", "authors": [ { "first": "Qi", "middle": [], "last": "Lei", "suffix": "" }, { "first": "Lingfei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Pin-Yu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Dimakis", "suffix": "" }, { "first": "S", "middle": [], "last": "Inderjit", "suffix": "" }, { "first": "Michael", "middle": [ "J" ], "last": "Dhillon", "suffix": "" }, { "first": "", "middle": [], "last": "Witbrock", "suffix": "" } ], "year": 2019, "venue": "Proceedings of Machine Learning and Systems", "volume": "", "issue": "", "pages": "146--165", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qi Lei, Lingfei Wu, Pin-Yu Chen, Alex Dimakis, Inderjit S. Dhillon, and Michael J Witbrock. 2019. Discrete adversarial attacks and submodular optimization with applications to text classification. 
In Proceedings of Machine Learning and Systems 2019, pages 146-165.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Wordnet: A lexical database for english", "authors": [ { "first": "A", "middle": [], "last": "George", "suffix": "" }, { "first": "", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1995, "venue": "Commun. ACM", "volume": "38", "issue": "11", "pages": "39--41", "other_ids": { "DOI": [ "10.1145/219717.219748" ] }, "num": null, "urls": [], "raw_text": "George A. Miller. 1995. Wordnet: A lexical database for english. Commun. ACM, 38(11):39-41.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Parsimonious Black-Box adversarial attacks via efficient combinatorial optimization", "authors": [ { "first": "Seungyong", "middle": [], "last": "Moon", "suffix": "" }, { "first": "Gaon", "middle": [], "last": "An", "suffix": "" }, { "first": "Hyun Oh", "middle": [], "last": "Song", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seungyong Moon, Gaon An, and Hyun Oh Song. 2019. Parsimonious Black-Box adversarial attacks via efficient combinatorial optimization.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "TextAttack: A framework for adversarial attacks in natural language processing", "authors": [ { "first": "John", "middle": [], "last": "Morris", "suffix": "" }, { "first": "Eli", "middle": [], "last": "Lifland", "suffix": "" }, { "first": "Jin", "middle": [ "Yong" ], "last": "Yoo", "suffix": "" }, { "first": "Yanjun", "middle": [], "last": "Qi", "suffix": "" } ], "year": 2020, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Morris, Eli Lifland, Jin Yong Yoo, and Yanjun Qi. 2020a. TextAttack: A framework for adversarial attacks in natural language processing. 
ArXiv, abs/2005.05909.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Yangfeng Ji, and Yanjun Qi. 2020b. Reevaluating adversarial examples in natural language", "authors": [ { "first": "John", "middle": [ "X" ], "last": "Morris", "suffix": "" }, { "first": "Eli", "middle": [], "last": "Lifland", "suffix": "" }, { "first": "Jack", "middle": [], "last": "Lanchantin", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John X. Morris, Eli Lifland, Jack Lanchantin, Yangfeng Ji, and Yanjun Qi. 2020b. Reevaluating adversarial examples in natural language.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Counter-fitting word vectors to linguistic constraints", "authors": [ { "first": "Nikola", "middle": [], "last": "Mrksic", "suffix": "" }, { "first": "Diarmuid\u00f3", "middle": [], "last": "S\u00e9aghdha", "suffix": "" }, { "first": "Blaise", "middle": [], "last": "Thomson", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Gasic", "suffix": "" }, { "first": "Lina", "middle": [ "Maria" ], "last": "Rojas-Barahona", "suffix": "" }, { "first": "Pei", "middle": [ "Hao" ], "last": "Su", "suffix": "" }, { "first": "David", "middle": [], "last": "Vandyke", "suffix": "" }, { "first": "Tsung-Hsien", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Steve", "middle": [ "J" ], "last": "Young", "suffix": "" } ], "year": 2016, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikola Mrksic, Diarmuid \u00d3 S\u00e9aghdha, Blaise Thomson, Milica Gasic, Lina Maria Rojas-Barahona, Pei-hao Su, David Vandyke, Tsung-Hsien Wen, and Steve J. Young. 2016. Counter-fitting word vectors to linguistic constraints. 
In HLT-NAACL.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)", "volume": "", "issue": "", "pages": "115--124", "other_ids": { "DOI": [ "10.3115/1219840.1219855" ] }, "num": null, "urls": [], "raw_text": "Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 115-124, Ann Arbor, Michigan. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. 
Language models are unsupervised multitask learners.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Generating natural language adversarial examples through probability weighted word saliency", "authors": [ { "first": "Yihe", "middle": [], "last": "Shuhuai Ren", "suffix": "" }, { "first": "Kun", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "He", "suffix": "" }, { "first": "", "middle": [], "last": "Che", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1085--1097", "other_ids": { "DOI": [ "10.18653/v1/P19-1103" ] }, "num": null, "urls": [], "raw_text": "Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1085-1097, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Semantically equivalent adversarial rules for debugging NLP models", "authors": [ { "first": "Sameer", "middle": [], "last": "Marco Tulio Ribeiro", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Singh", "suffix": "" }, { "first": "", "middle": [], "last": "Guestrin", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "856--865", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging NLP models. 
pages 856-865.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Intriguing properties of neural networks", "authors": [ { "first": "Christian", "middle": [], "last": "Szegedy", "suffix": "" }, { "first": "Wojciech", "middle": [], "last": "Zaremba", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Joan", "middle": [], "last": "Bruna", "suffix": "" }, { "first": "Dumitru", "middle": [], "last": "Erhan", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Fergus", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1312.6199" ] }, "num": null, "urls": [], "raw_text": "Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "AllenNLP Interpret: A framework for explaining predictions of NLP models", "authors": [ { "first": "Eric", "middle": [], "last": "Wallace", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Tuyls", "suffix": "" }, { "first": "Junlin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Sanjay", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2019, "venue": "Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, and Sameer Singh. 2019. AllenNLP Interpret: A framework for explaining predictions of NLP models. 
In Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Word-level textual adversarial attacking as combinatorial optimization", "authors": [ { "first": "Yuan", "middle": [], "last": "Zang", "suffix": "" }, { "first": "Fanchao", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Chenghao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Meng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6066--6080", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020. Word-level textual adversarial attacking as combinatorial optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6066-6080, Online. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Generating fluent adversarial examples for natural languages", "authors": [ { "first": "Huangzhao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Ning", "middle": [], "last": "Miao", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Li", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5564--5569", "other_ids": { "DOI": [ "10.18653/v1/P19-1559" ] }, "num": null, "urls": [], "raw_text": "Huangzhao Zhang, Hao Zhou, Ning Miao, and Lei Li. 2019a. Generating fluent adversarial examples for natural languages. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5564-5569, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Bertscore: Evaluating text generation with bert", "authors": [ { "first": "Tianyi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "*", "middle": [], "last": "", "suffix": "" }, { "first": "Varsha", "middle": [], "last": "Kishore", "suffix": "" }, { "first": "*", "middle": [], "last": "", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Wu", "suffix": "" }, { "first": "*", "middle": [], "last": "", "suffix": "" }, { "first": "Kilian", "middle": [ "Q" ], "last": "Weinberger", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Generating textual adversarial examples for deep learning models: A survey", "authors": [ { "first": "Wei", "middle": [ "Emma" ], "last": "Zhang", "suffix": "" }, { "first": "Z", "middle": [], "last": "Quan", "suffix": "" }, { "first": "Ahoud", "middle": [], "last": "Sheng", "suffix": "" }, { "first": "F", "middle": [], "last": "Abdulrahmn", "suffix": "" }, { "first": "", "middle": [], "last": "Alhazmi", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Emma Zhang, Quan Z. Sheng, and Ahoud Abdulrahmn F. Alhazmi. 2019b. Generating textual adversarial examples for deep learning models: A survey. 
CoRR, abs/1901.06796.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Character-level convolutional networks for text classification", "authors": [ { "first": "Xiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Junbo", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Lecun", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems", "volume": "28", "issue": "", "pages": "649--657", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 649-657. Curran Associates, Inc.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Number of queries vs. length of input text. Similar figure for LSTM models are available in the appendix.", "uris": null, "type_str": "figure" }, "TABREF1": { "num": null, "text": "The three search spaces in our benchmarking.", "type_str": "table", "html": null, "content": "" }, "TABREF2": { "num": null, "text": "Different search algorithms proposed for NLP attacks. W indicates the number of words in the input. T is the maximum number of transformation options for a given input.", "type_str": "table", "html": null, "content": "
" } } } }