{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:04:54.909195Z" }, "title": "Exploring Weaknesses of VQA Models through Attribution Driven Insights", "authors": [ { "first": "Shaunak", "middle": [], "last": "Halbe", "suffix": "", "affiliation": {}, "email": "shaunak9@ieee.org" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Deep Neural Networks have been successfully used for the task of Visual Question Answering for the past few years owing to the availability of relevant large scale datasets. However these datasets are created in artificial settings and rarely reflect the real world scenario. Recent research effectively applies these VQA models for answering visual questions for the blind. Despite achieving high accuracy these models appear to be susceptible to variation in input questions.We analyze popular VQA models through the lens of attribution (input's influence on predictions) to gain valuable insights. Further, We use these insights to craft adversarial attacks which inflict significant damage to these systems with negligible change in meaning of the input questions. We believe this will enhance development of systems more robust to the possible variations in inputs when deployed to assist the visually impaired.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Deep Neural Networks have been successfully used for the task of Visual Question Answering for the past few years owing to the availability of relevant large scale datasets. However these datasets are created in artificial settings and rarely reflect the real world scenario. Recent research effectively applies these VQA models for answering visual questions for the blind. Despite achieving high accuracy these models appear to be susceptible to variation in input questions.We analyze popular VQA models through the lens of attribution (input's influence on predictions) to gain valuable insights. Further, We use these insights to craft adversarial attacks which inflict significant damage to these systems with negligible change in meaning of the input questions. We believe this will enhance development of systems more robust to the possible variations in inputs when deployed to assist the visually impaired.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Visual Question Answering (VQA) is a semantic task, where a model attempts to answer a natural language question based on the visual context. With the emergence of large scale datasets (Antol et al., 2015; Goyal et al., 2017; Krishna et al., 2016; Malinowski and Fritz, 2014; , There has been outstanding progress in VQA systems in terms of accuracy obtained on the associated test sets. However these systems are seen to somewhat fail when applied in real-world situations (Gurari et al., 2018; Agrawal et al., 2016 ) majorly due to a significant domain shift and an inherent language/image bias. A direct application of VQA is to answer the questions for images captured by blind people. The VizWiz (Gurari et al., 2018 ) is a first of its kind goal oriented dataset which reflects the challenges conventional VQA models might face when applied to assist the blind. The questions in this dataset are not straightforward and are often conversational which is natural knowing that they have been asked by visually impaired people for assistance. 
Due to unsuitable images or irrelevant questions, most of these questions are unanswerable. They differ from questions in other datasets mainly in the type of answer they expect. The questions are often subjective and require the algorithm to actually read (OCR), detect, or count, and moreover to understand the image before answering. We believe models trained on such a challenging dataset must be interpretable and should be analyzed for robustness, to ensure they are accurate for the right reasons.", "cite_spans": [ { "start": 185, "end": 205, "text": "(Antol et al., 2015;", "ref_id": "BIBREF1" }, { "start": 206, "end": 225, "text": "Goyal et al., 2017;", "ref_id": "BIBREF6" }, { "start": 226, "end": 247, "text": "Krishna et al., 2016;", "ref_id": "BIBREF11" }, { "start": 248, "end": 275, "text": "Malinowski and Fritz, 2014;", "ref_id": "BIBREF13" }, { "start": 474, "end": 495, "text": "(Gurari et al., 2018;", "ref_id": "BIBREF7" }, { "start": 496, "end": 516, "text": "Agrawal et al., 2016", "ref_id": "BIBREF0" }, { "start": 701, "end": 721, "text": "(Gurari et al., 2018", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Deep Neural Networks often lack interpretability but are widely used owing to their high accuracy on representative test sets. In most applications a high test-set accuracy is sufficient, but in certain sensitive areas, understanding causality is crucial. When deploying such VQA models to aid the blind, utmost care needs to be taken to prevent the model from answering wrongly and thereby to avoid possible accidents. In the past, various saliency methods have been used to interpret models with textual inputs. The Vanilla Gradient method (Simonyan et al., 2013) visualizes the gradients of the loss with respect to each input token (a word in this case). SmoothGrad (Smilkov et al., 2017) averages the gradient by adding Gaussian noise to the input. Layer-wise Relevance Propagation (LRP) (Binder et al., 2016) and DeepLift (Shrikumar et al., 2017) are similar methods used for this purpose.", "cite_spans": [ { "start": 536, "end": 559, "text": "(Simonyan et al., 2013)", "ref_id": "BIBREF18" }, { "start": 661, "end": 683, "text": "(Smilkov et al., 2017)", "ref_id": "BIBREF20" }, { "start": 783, "end": 804, "text": "(Binder et al., 2016)", "ref_id": "BIBREF3" }, { "start": 816, "end": 840, "text": "(Shrikumar et al., 2017)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Model Interpretability", "sec_num": "2" }, { "text": "As Integrated Gradients (IG) (Sundararajan et al., 2017) satisfies the necessary axioms, we use it for the purpose of interpretability. IG computes attributions for the input features based on the network's predictions. These attributions assign credit/blame to the input features (pixels in the case of an image and words in the case of a question) which are responsible for the output of the model. They can help identify when a model is accurate for the wrong reasons, such as over-reliance on the image or on language priors. The attributions are computed with respect to a baseline input; in this paper, we use an empty question as the baseline. We use these attributions, which specify word importance in the input question, to design adversarial questions that the model fails to answer correctly. While doing so, we try to preserve the original meaning of the question and to keep it simple. 
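As an illustration of this attribution step, the following is a minimal sketch of Integrated Gradients over the question-word embeddings with an empty-question (all-zero embedding) baseline; it is not the authors' code, and the model interface, embedding layer, and step count are assumed for exposition.

```python
# Minimal sketch of Integrated Gradients over question-word embeddings.
# Assumptions (not from the paper): a hypothetical VQA model
# `model(image_feats, q_embeds)` that returns answer logits, and an
# embedding layer `embed`; the baseline is an all-zero (empty-question)
# embedding, as described above.
import torch

def question_attributions(model, embed, image_feats, token_ids, steps=50):
    q_embeds = embed(token_ids).detach()      # (T, D) word embeddings
    baseline = torch.zeros_like(q_embeds)     # empty-question baseline
    with torch.no_grad():                     # fix the predicted answer class
        pred = model(image_feats, q_embeds.unsqueeze(0)).argmax(dim=1).item()
    total_grads = torch.zeros_like(q_embeds)
    for k in range(1, steps + 1):             # Riemann approximation of the path integral
        point = baseline + (k / steps) * (q_embeds - baseline)
        point.requires_grad_(True)
        score = model(image_feats, point.unsqueeze(0))[0, pred]
        grad, = torch.autograd.grad(score, point)
        total_grads += grad
    ig = (q_embeds - baseline) * (total_grads / steps)
    return ig.sum(dim=1)                      # one attribution score per word
```

Summing over the embedding dimension yields one score per word, which is the notion of word importance used in the rest of this section.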
We design these questions manually by incorporating highly attributed content-free words into the original question, taking into consideration the free-form, conversational nature of the questions that any user of such a system might ask. By content-free, we refer to words that are context-independent, like prepositions (e.g., \"on\", \"in\"), determiners (e.g., \"this\", \"that\") and certain qualifiers (e.g., \"much\", \"many\"), among others.", "cite_spans": [ { "start": 15, "end": 42, "text": "(Sundararajan et al., 2017)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Model Interpretability", "sec_num": "2" }, { "text": "The main idea of adversarial attacks is to carefully perturb the input, without making perceivable changes, in order to affect the prediction of the model. There has been significant research on adversarial attacks concerning images (Goodfellow et al., 2014; Madry et al., 2017). These attacks exploit the oversensitivity of models towards changes in the input image. Sharma et al. 2018 study attention-guided implementations of popular image-based attacks on VQA models. Xu et al. 2018 discuss methods to generate targeted attacks that perturb input images in a multimodal setting. Ramakrishnan et al. 2018 observe that VQA models heavily rely on certain language priors to directly arrive at the answer irrespective of the image; they further develop a bias-reducing approach to improve performance. Kafle and Kanan 2017 study the response of VQA models to various question categories to indicate deficiencies in the datasets. Huang et al. 2019 analyze the robustness of VQA models on basic questions ranked by similarity using a LASSO-based optimization method. Finally, Mudrakarta et al. 2018 use attributions to determine word importance and leverage them to craft adversarial questions. We adapt their ideas to the conversational aspect of questions in VizWiz to better suit our task. In this paper, we restrict ourselves to attacks in the language domain, i.e., we only perturb the input questions and analyze the network's response.", "cite_spans": [ { "start": 232, "end": 257, "text": "(Goodfellow et al., 2014;", "ref_id": "BIBREF5" }, { "start": 258, "end": 277, "text": "Madry et al., 2017)", "ref_id": "BIBREF12" }, { "start": 935, "end": 952, "text": "Huang et al. 2019", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "The VizWiz dataset (Gurari et al., 2018) consists of 20,523 training image-question pairs and 4,319 validation pairs (Bhattacharya and Gurari, 2019), whereas the VQA v2 dataset (Goyal et al., 2017) consists of 443,757 training questions and 214,354 validation questions. The VizWiz dataset is significantly smaller than other VQA datasets and hence is not ideal for determining word importance for the content-free words. In order to do justice to these words and to keep the analysis generalizable, we use the VQA v2 dataset for computing text attributions. We use the Counter model (Zhang et al., 2018) for the purpose of computing attributions. This model is structurally similar to the Q+I+A model (Kazemi and Elqursh, 2017) (which was used to benchmark on VizWiz). We select this model for ease of reproducibility and for consistency with the original paper (Gurari et al., 2018). 
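To make the word-selection step that follows concrete, here is a minimal sketch of how per-word attributions can be aggregated over VQA v2 validation questions and restricted to content-free words; the interfaces are assumed rather than taken from the released code, and the word list is illustrative, not the exhaustive set used in the paper.

```python
# Sketch: rank content-free words by the attribution they receive across
# validation questions. `attribution_fn` is assumed to return one score
# per token (e.g. the IG sketch above); the word list is illustrative.
from collections import defaultdict

CONTENT_FREE = {'what', 'is', 'this', 'that', 'how', 'many', 'much',
                'on', 'in', 'for', 'me', 'to', 'a', 'the'}

def rank_content_free_words(val_samples, attribution_fn):
    total = defaultdict(float)   # total attribution per word
    count = defaultdict(int)     # frequency of occurrence per word
    for image_feats, tokens, token_ids in val_samples:
        scores = attribution_fn(image_feats, token_ids)
        for word, s in zip(tokens, scores.tolist()):
            total[word] += s
            count[word] += 1
    ranked = [(w, total[w], total[w] / count[w])
              for w in CONTENT_FREE if count[w] > 0]
    return sorted(ranked, key=lambda r: r[2], reverse=True)  # highest average first
```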
We compute attributions over the validation set, from which the highly attributed words are selected to design prefix and suffix phrases that can be incorporated into the original questions for adversarial effect. Further, we verify and test these attacks on the following models: (1) Pythia (Singh et al., 2019) (the VizWiz 2018 challenge winner), pretrained on VQA v2 and transferred to VizWiz (train split), and (2) the Q+I+A model (which was used to benchmark on VizWiz), trained from scratch on VizWiz (train split).", "cite_spans": [ { "start": 19, "end": 40, "text": "(Gurari et al., 2018)", "ref_id": "BIBREF7" }, { "start": 121, "end": 152, "text": "(Bhattacharya and Gurari, 2019)", "ref_id": "BIBREF2" }, { "start": 182, "end": 202, "text": "(Goyal et al., 2017)", "ref_id": "BIBREF6" }, { "start": 585, "end": 605, "text": "(Zhang et al., 2018)", "ref_id": "BIBREF23" }, { "start": 858, "end": 879, "text": "(Gurari et al., 2018)", "ref_id": "BIBREF7" }, { "start": 1166, "end": 1185, "text": "(Singh et al., 2019", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Model and Data Specifications", "sec_num": "5.1" }, { "text": "We compute the total attribution that every word receives, as well as the average attribution for every word based on its frequency of occurrence. We only take into account content-free words, with the intention of preserving the meaning of the original question when these words are added to it. We observe that among the content-free words, 'what', 'many', 'is', 'this', and 'how' consistently receive high attribution in a question. We use these words, along with some other context-independent words, to design the attacks, creating seemingly natural phrases to be prepended or appended to the question. We observe that the model alters its prediction under the influence of these added words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Observations", "sec_num": "5.2" }, { "text": "We present Suffix Attacks, wherein we append content-free phrases to the end of each question and evaluate the strength of these attacks through the accuracy obtained by the model on the validation set and the percentage of answers it predicts as unanswerable/unsuitable (U).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Suffix Attacks", "sec_num": "5.3" }, { "text": "We expand the Prefix attacks of Mudrakarta et al. 2018 in a conversational vein to suit our task. These are seen to be more effective, as a prefix allows us to add important words like 'What' and 'How' to the start of a question, which confuses the model to a greater extent than suffix attacks. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prefix Attacks", "sec_num": "5.4" }, { "text": "The Pythia v0.3 (Singh et al., 2019) model achieves an accuracy of 53%, while the Q+I+A model achieves 48.8%, when evaluated on clean samples from the validation set. We tabulate the results obtained by using these phrases as prefixes and suffixes. It is worth noting that when tested on empty questions (which is the baseline for our task), Pythia retains an accuracy of 35.43% while Q+I+A retains 38.35%. Thus, our strongest attacks, which are meaningful combinations of the basic attacks (in bold; see Table 1 for Pythia and Table 3 for Q+I+A), drop the models' accuracy close to the empty-question lower bound. 
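The evaluation protocol for these attacks can be sketched as follows; this is a simplified illustration that assumes a `predict(image, question)` wrapper around the model and a `vqa_accuracy(answer, gt_answers)` implementation of the standard consensus metric, neither of which is taken from the released code.

```python
# Sketch: apply a prefix/suffix phrase to every validation question and
# report accuracy (%) and the percentage of answers predicted unanswerable.
def evaluate_attack(samples, predict, vqa_accuracy, prefix='', suffix=''):
    acc_sum, unanswerable = 0.0, 0
    for image, question, gt_answers in samples:
        attacked = (prefix + ' ' + question + ' ' + suffix).strip()
        answer = predict(image, attacked)
        acc_sum += vqa_accuracy(answer, gt_answers)   # value in [0, 1]
        if answer == 'unanswerable':
            unanswerable += 1
    n = len(samples)
    return 100.0 * acc_sum / n, 100.0 * unanswerable / n

# e.g. evaluate_attack(vizwiz_val, predict, vqa_accuracy,
#                      suffix='answer this for me in not many words')
```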
Our strongest attack (see Table 1) renders 97% of the questions unanswerable, which is a significant increase from 58% when evaluated on clean questions.", "cite_spans": [ { "start": 14, "end": 34, "text": "(Singh et al., 2019)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 490, "end": 497, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 528, "end": 535, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 641, "end": 648, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Evaluation and Analysis", "sec_num": "5.5" }, { "text": "6 Performance on other attacks", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Analysis", "sec_num": "5.5" }, { "text": "We observe that when we evaluate the model by substituting certain words of the input question with low-attribution words that change the meaning of the question, the answer predicted in most cases is 'unanswerable'. This means that the model does not over-rely on images and is robust in this aspect.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Substitution", "sec_num": "6.1" }, { "text": "We follow the approach of Feng et al. 2018 to iteratively remove less important words from the input question. With the removal of around 50% of the words from a question, the accuracy drops close to 46% and 72% of the questions are rendered unanswerable. The Pythia model is fairly robust in this sense too, as its output becomes 'unanswerable' after considerable input reduction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Reduction", "sec_num": "6.2" }, { "text": "To evaluate the effect of absurd attacks on these models, we make a short, non-exhaustive list of objects that do not appear in the validation set of VizWiz (questions, answers, and captions) but are present in the training set. We use these objects to form questions similar to the training-set questions that contained these objects. A good model should be able to detect absurd questions. For absurd questions like \"which country's flag is this?\" (where \"flag\" does not occur in the validation set of VizWiz), Pythia predicts over 90% of these (clean image)-(absurd question) pairs as 'unanswerable', which is the desired outcome.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Absurd Questions", "sec_num": "6.3" }, { "text": "We analyzed two popular VQA models, trained under different circumstances, for robustness. Our analysis was driven by textual attributions, which helped identify shortcomings of current approaches to solving a real-world problem. The attacks discussed in this paper illuminate the need for achieving robustness in order to scale up to the task of visual assistance. To improve accessibility for the visually impaired, these VQA systems must be interpretable and safe to operate even under adverse conditions arising out of conversational variations. We believe these insights can be useful in surmounting this challenging task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Integrated Gradients (IG): Vanilla gradients, LRP, and DeepLift violate the axioms of Sensitivity and Implementation Invariance, as discussed by Sundararajan et al. 2017. 
As Integrated", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Analyzing the behavior of visual question answering models", "authors": [ { "first": "Aishwarya", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1955--1960", "other_ids": { "DOI": [ "10.18653/v1/D16-1203" ] }, "num": null, "urls": [], "raw_text": "Aishwarya Agrawal, Dhruv Batra, and Devi Parikh. 2016. Analyzing the behavior of visual question an- swering models. In Proceedings of the 2016 Con- ference on Empirical Methods in Natural Language Processing, pages 1955-1960, Austin, Texas. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "VQA: Visual Question Answering", "authors": [ { "first": "Stanislaw", "middle": [], "last": "Antol", "suffix": "" }, { "first": "Aishwarya", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Jiasen", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "C", "middle": [ "Lawrence" ], "last": "Zitnick", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2015, "venue": "International Conference on Computer Vision (ICCV)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Mar- garet Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual Question An- swering. In International Conference on Computer Vision (ICCV).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Vizwiz dataset browser: A tool for visualizing machine learning datasets", "authors": [ { "first": "Nilavra", "middle": [], "last": "Bhattacharya", "suffix": "" }, { "first": "Danna", "middle": [], "last": "Gurari", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1912.09336" ] }, "num": null, "urls": [], "raw_text": "Nilavra Bhattacharya and Danna Gurari. 2019. Vizwiz dataset browser: A tool for visualizing machine learning datasets. arXiv preprint arXiv:1912.09336.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Layer-wise relevance propagation for neural networks with local renormalization layers", "authors": [ { "first": "Alexander", "middle": [], "last": "Binder", "suffix": "" }, { "first": "Gr\u00e9goire", "middle": [], "last": "Montavon", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Lapuschkin", "suffix": "" }, { "first": "Klaus-Robert", "middle": [], "last": "M\u00fcller", "suffix": "" }, { "first": "Wojciech", "middle": [], "last": "Samek", "suffix": "" } ], "year": 2016, "venue": "International Conference on Artificial Neural Networks", "volume": "", "issue": "", "pages": "63--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Binder, Gr\u00e9goire Montavon, Sebastian Lapuschkin, Klaus-Robert M\u00fcller, and Wojciech Samek. 2016. Layer-wise relevance propagation for neural networks with local renormalization layers. 
In International Conference on Artificial Neural Net- works, pages 63-71. Springer.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Pathologies of neural models make interpretations difficult", "authors": [ { "first": "Eric", "middle": [], "last": "Shi Feng", "suffix": "" }, { "first": "", "middle": [], "last": "Wallace", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Grissom", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Rodriguez", "suffix": "" }, { "first": "", "middle": [], "last": "Boyd-Graber", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1804.07781" ] }, "num": null, "urls": [], "raw_text": "Shi Feng, Eric Wallace, II Grissom, Mohit Iyyer, Pe- dro Rodriguez, Jordan Boyd-Graber, et al. 2018. Pathologies of neural models make interpretations difficult. arXiv preprint arXiv:1804.07781.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Explaining and harnessing adversarial examples", "authors": [ { "first": "J", "middle": [], "last": "Ian", "suffix": "" }, { "first": "Jonathon", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Shlens", "suffix": "" }, { "first": "", "middle": [], "last": "Szegedy", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6572" ] }, "num": null, "urls": [], "raw_text": "Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversar- ial examples. arXiv preprint arXiv:1412.6572.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering", "authors": [ { "first": "Yash", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Tejas", "middle": [], "last": "Khot", "suffix": "" }, { "first": "Douglas", "middle": [], "last": "Summers-Stay", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2017, "venue": "Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image under- standing in Visual Question Answering. In Confer- ence on Computer Vision and Pattern Recognition (CVPR).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Vizwiz grand challenge: Answering visual questions from blind people", "authors": [ { "first": "Danna", "middle": [], "last": "Gurari", "suffix": "" }, { "first": "Qing", "middle": [], "last": "Li", "suffix": "" }, { "first": "Abigale", "middle": [ "J" ], "last": "Stangl", "suffix": "" }, { "first": "Anhong", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Chi", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Kristen", "middle": [], "last": "Grauman", "suffix": "" }, { "first": "Jiebo", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Jeffrey", "middle": [ "P" ], "last": "Bigham", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danna Gurari, Qing Li, Abigale J Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P Bigham. 2018. 
Vizwiz grand challenge: Answering visual questions from blind people. CVPR.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A novel framework for robustness analysis of visual qa models", "authors": [ { "first": "Jia-Hong", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Cuong Duc", "middle": [], "last": "Dao", "suffix": "" }, { "first": "Modar", "middle": [], "last": "Alfadly", "suffix": "" }, { "first": "Bernard", "middle": [], "last": "Ghanem", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "8449--8456", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jia-Hong Huang, Cuong Duc Dao, Modar Alfadly, and Bernard Ghanem. 2019. A novel framework for ro- bustness analysis of visual qa models. In Proceed- ings of the AAAI Conference on Artificial Intelli- gence, volume 33, pages 8449-8456.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "An analysis of visual question answering algorithms", "authors": [ { "first": "Kushal", "middle": [], "last": "Kafle", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Kanan", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the IEEE International Conference on Computer Vision", "volume": "", "issue": "", "pages": "1965--1973", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kushal Kafle and Christopher Kanan. 2017. An analy- sis of visual question answering algorithms. In Pro- ceedings of the IEEE International Conference on Computer Vision, pages 1965-1973.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Show, ask, attend, and answer: A strong baseline for visual question answering", "authors": [ { "first": "Vahid", "middle": [], "last": "Kazemi", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Elqursh", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vahid Kazemi and Ali Elqursh. 2017. Show, ask, at- tend, and answer: A strong baseline for visual ques- tion answering.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Visual genome: Connecting language and vision using crowdsourced dense image annotations", "authors": [ { "first": "Ranjay", "middle": [], "last": "Krishna", "suffix": "" }, { "first": "Yuke", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Groth", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Hata", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Kravitz", "suffix": "" }, { "first": "Stephanie", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yannis", "middle": [], "last": "Kalantidis", "suffix": "" }, { "first": "Li-Jia", "middle": [], "last": "Li", "suffix": "" }, { "first": "David", "middle": [ "A" ], "last": "Shamma", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Bernstein", "suffix": "" }, { "first": "Li", "middle": [], "last": "Fei-Fei", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin John- son, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, Michael Bernstein, and Li Fei-Fei. 2016. 
Visual genome: Connecting language and vision using crowdsourced dense image annotations.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Towards deep learning models resistant to adversarial attacks", "authors": [ { "first": "Aleksander", "middle": [], "last": "Madry", "suffix": "" }, { "first": "Aleksandar", "middle": [], "last": "Makelov", "suffix": "" }, { "first": "Ludwig", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Dimitris", "middle": [], "last": "Tsipras", "suffix": "" }, { "first": "Adrian", "middle": [], "last": "Vladu", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1706.06083" ] }, "num": null, "urls": [], "raw_text": "Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2017. Towards deep learning models resistant to adversar- ial attacks. arXiv preprint arXiv:1706.06083.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A multiworld approach to question answering about realworld scenes based on uncertain input", "authors": [ { "first": "Mateusz", "middle": [], "last": "Malinowski", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Fritz", "suffix": "" } ], "year": 2014, "venue": "Advances in Neural Information Processing Systems", "volume": "27", "issue": "", "pages": "1682--1690", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mateusz Malinowski and Mario Fritz. 2014. A multi- world approach to question answering about real- world scenes based on uncertain input. In Advances in Neural Information Processing Systems 27, pages 1682-1690. Curran Associates, Inc.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Association for Computational Linguistics", "authors": [ { "first": "Ankur", "middle": [], "last": "Pramod Kaushik Mudrakarta", "suffix": "" }, { "first": "Mukund", "middle": [], "last": "Taly", "suffix": "" }, { "first": "Kedar", "middle": [], "last": "Sundararajan", "suffix": "" }, { "first": "", "middle": [], "last": "Dhamdhere", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1896--1906", "other_ids": { "DOI": [ "10.18653/v1/P18-1176" ] }, "num": null, "urls": [], "raw_text": "Pramod Kaushik Mudrakarta, Ankur Taly, Mukund Sundararajan, and Kedar Dhamdhere. 2018. Did the model understand the question? In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 1896-1906, Melbourne, Australia. As- sociation for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Overcoming language priors in visual question answering with adversarial regularization", "authors": [ { "first": "Aishwarya", "middle": [], "last": "Sainandan Ramakrishnan", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2018, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "1541--1551", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sainandan Ramakrishnan, Aishwarya Agrawal, and Stefan Lee. 2018. Overcoming language priors in visual question answering with adversarial regular- ization. 
In Advances in Neural Information Process- ing Systems, pages 1541-1551.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Attend and attack : Attention guided adversarial attacks on visual question answering models", "authors": [ { "first": "Vasu", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Ankita", "middle": [], "last": "Kalra", "suffix": "" }, { "first": "Sumedha", "middle": [], "last": "Vaibhav", "suffix": "" }, { "first": "Labhesh", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Louis-Phillippe", "middle": [], "last": "Patel", "suffix": "" }, { "first": "", "middle": [], "last": "Morency", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vasu Sharma, Ankita Kalra, Vaibhav, Sumedha Chaud- hary, Labhesh Patel, and Louis-Phillippe Morency. 2018. Attend and attack : Attention guided adver- sarial attacks on visual question answering models.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Learning important features through propagating activation differences", "authors": [ { "first": "Avanti", "middle": [], "last": "Shrikumar", "suffix": "" }, { "first": "Peyton", "middle": [], "last": "Greenside", "suffix": "" }, { "first": "Anshul", "middle": [], "last": "Kundaje", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning", "volume": "70", "issue": "", "pages": "3145--3153", "other_ids": {}, "num": null, "urls": [], "raw_text": "Avanti Shrikumar, Peyton Greenside, and Anshul Kun- daje. 2017. Learning important features through propagating activation differences. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 3145-3153. JMLR. org.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Deep inside convolutional networks: Visualising image classification models and saliency maps", "authors": [ { "first": "Karen", "middle": [], "last": "Simonyan", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Vedaldi", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Zisserman", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1312.6034" ] }, "num": null, "urls": [], "raw_text": "Karen Simonyan, Andrea Vedaldi, and Andrew Zisser- man. 2013. Deep inside convolutional networks: Vi- sualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Pythia-a platform for vision & language research", "authors": [ { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Vivek", "middle": [], "last": "Natarajan", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Xinlei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Meet", "middle": [], "last": "Shah", "suffix": "" }, { "first": "Marcus", "middle": [], "last": "Rohrbach", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2018, "venue": "SysML Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amanpreet Singh, Vivek Natarajan, Yu Jiang, Xinlei Chen, Meet Shah, Marcus Rohrbach, Dhruv Batra, and Devi Parikh. 2019. Pythia-a platform for vision & language research. 
In SysML Workshop, NeurIPS, volume 2018.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Smoothgrad: removing noise by adding noise", "authors": [ { "first": "Daniel", "middle": [], "last": "Smilkov", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Thorat", "suffix": "" }, { "first": "Been", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Fernanda", "middle": [], "last": "Vi\u00e9gas", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Wattenberg", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1706.03825" ] }, "num": null, "urls": [], "raw_text": "Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Vi\u00e9gas, and Martin Wattenberg. 2017. Smoothgrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Axiomatic attribution for deep networks", "authors": [ { "first": "Mukund", "middle": [], "last": "Sundararajan", "suffix": "" }, { "first": "Ankur", "middle": [], "last": "Taly", "suffix": "" }, { "first": "Qiqi", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning", "volume": "70", "issue": "", "pages": "3319--3328", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Pro- ceedings of the 34th International Conference on Machine Learning -Volume 70, ICML'17, page 3319-3328. JMLR.org.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Fooling vision and language models despite localization and attention mechanism", "authors": [ { "first": "Xiaojun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xinyun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Chang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rohrbach", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "4951--4961", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaojun Xu, Xinyun Chen, Chang Liu, Anna Rohrbach, Trevor Darrell, and Dawn Song. 2018. Fooling vi- sion and language models despite localization and attention mechanism. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition, pages 4951-4961.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Learning to count objects in natural images for visual question answering", "authors": [ { "first": "Yan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jonathon", "middle": [], "last": "Hare", "suffix": "" }, { "first": "Adam Pr\u00fcgel-", "middle": [], "last": "Bennett", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yan Zhang, Jonathon Hare, and Adam Pr\u00fcgel-Bennett. 2018. Learning to count objects in natural images for visual question answering. 
In International Con- ference on Learning Representations.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Visual7w: Grounded question answering in images", "authors": [ { "first": "Yuke", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Groth", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Bernstein", "suffix": "" }, { "first": "Li", "middle": [], "last": "Fei-Fei", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "4995--5004", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuke Zhu, Oliver Groth, Michael Bernstein, and Li Fei- Fei. 2016. Visual7w: Grounded question answering in images. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4995-5004.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "Attributions overlaid on the corresponding input words. The output of the model changes from 'yellow' to 1 which is driven by the word 'many'.", "type_str": "figure" }, "FIGREF1": { "num": null, "uris": null, "text": "The output of the model is driven by the word 'answer' acting as an adversary.", "type_str": "figure" }, "TABREF2": { "content": "
Pythia v0.3 (Singh et al., 2019)
Suffix Phrase | Accuracy | % U
guide me on this | 49.8 | 69.2
answer this for me | 48.82 | 75.19
answer this for me in not a lot of words | 45.3 | 82.47
answer this for me in not many words | 42.5 | 88.46
", "text": "Prefix attacks on Pythia v0.3", "type_str": "table", "num": null, "html": null }, "TABREF3": { "content": "
: Suffix attacks on Pythia v0.3
Q+I+A (Kazemi and Elqursh, 2017)
Suffix Phrase | Accuracy | % U
describe this for me | 43.52 | 82.8
answer this for me | 43.90 | 89.7
guide me on this | 41.31 | 87.0
answer this for me in not a lot of words | 40.1 | 91.13
answer this for me in not many words | 38.44 | 94.1
", "text": "", "type_str": "table", "num": null, "html": null }, "TABREF4": { "content": "
Q+I+A (Kazemi and Elqursh, 2017)
Prefix Phrase | Accuracy | % U
describe this for me | 46.72 | 76.8
answer this for me | 45.90 | 79.8
what is the answer to | 44.72 | 80.6
in not many words | 44.50 | 81.4
answer this for me in not many words | 42.1 | 81.13
", "text": "Suffix attacks on Q+I+A", "type_str": "table", "num": null, "html": null }, "TABREF5": { "content": "", "text": "Prefix attacks on Q+I+A", "type_str": "table", "num": null, "html": null } } } }