{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:09:32.721391Z" }, "title": "Efficient Explanations from Empirical Explainers", "authors": [ { "first": "Robert", "middle": [], "last": "Schwarzenberg", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Nils", "middle": [], "last": "Feldhus", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Sebastian", "middle": [], "last": "M\u00f6ller", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Amid a discussion about Green AI in which we see explainability neglected, we explore the possibility to efficiently approximate computationally expensive explainers. To this end, we propose feature attribution modelling with Empirical Explainers. Empirical Explainers learn from data to predict the attribution maps of expensive explainers. We train and test Empirical Explainers in the language domain and find that they model their expensive counterparts surprisingly well, at a fraction of the cost. They could thus mitigate the computational burden of neural explanations significantly, in applications that tolerate an approximation error.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Amid a discussion about Green AI in which we see explainability neglected, we explore the possibility to efficiently approximate computationally expensive explainers. To this end, we propose feature attribution modelling with Empirical Explainers. Empirical Explainers learn from data to predict the attribution maps of expensive explainers. We train and test Empirical Explainers in the language domain and find that they model their expensive counterparts surprisingly well, at a fraction of the cost. They could thus mitigate the computational burden of neural explanations significantly, in applications that tolerate an approximation error.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In recent years, important works were published on the ecological impacts of artificial intelligence and deep learning in particular, e.g. Strubell et al. (2019) , Schwartz et al. (2020) , Henderson et al. (2020) . Research is focused on the energy hunger of model training and subsequent inference in production. Besides training and in-production inference, explainability has become an integral phase of many neural systems.", "cite_spans": [ { "start": 139, "end": 161, "text": "Strubell et al. (2019)", "ref_id": "BIBREF31" }, { "start": 164, "end": 186, "text": "Schwartz et al. (2020)", "ref_id": null }, { "start": 189, "end": 212, "text": "Henderson et al. (2020)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the ongoing discussion about Green AI we see explainability neglected. Conversely, in the explainability community, even though research on efficiency is an active area, apparently the discussion is currently shaped by other aspects, such as faithfulness and plausibility (Jacovi and Goldberg, 2020) . This is surprising because to explain a single model output, many prominent explanation methods, in particular many feature attribution methods (cf. 
below), require a multiple of computing power when compared to the prediction step.", "cite_spans": [ { "start": 275, "end": 302, "text": "(Jacovi and Goldberg, 2020)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Take, for instance, the demonstrative but arguably realistic case of a classifier that was trained on 100k instances for 10 epochs. The training thus amounts to at least 1M forward passes and 1M backward passes. To produce explanations, in this paper, we consider feature attribution methods and focus on Integrated Gradients (IG) (Sundararajan et al., 2017) and Shapley Values (SV) (Castro et al., 2009) , which are popular and established but also computationally expensive. To compute the exact IG or SV is virtually intractable, which is why sampling-based approximations were devised. For IG", "cite_spans": [ { "start": 331, "end": 358, "text": "(Sundararajan et al., 2017)", "ref_id": "BIBREF33" }, { "start": 383, "end": 404, "text": "(Castro et al., 2009)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation: Expensive Explainers", "sec_num": "1.1" }, { "text": "\\phi_{f,i}(x) = \\frac{x_i - \\bar{x}_i}{s} \\sum_{k=1}^{s} \\frac{\\partial f\\big(\\bar{x} + \\frac{k}{s}(x - \\bar{x})\\big)}{\\partial x_i}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation: Expensive Explainers", "sec_num": "1.1" }, { "text": "is computed, where x is the input to model f, \\bar{x} is a user-defined baseline, s denotes the number of samples (a hyperparameter), and \\phi_{f,i}(x) denotes the attribution score of feature i. For SV, s permutations of the input data O_1, O_2, \\ldots, O_s are drawn and then features from x are added to a user-defined baseline,\u00b9 in the order they occur in the permutation. Let \\mathrm{Pre}_i(O) denote the baseline including the features that were added to the baseline prior to i. The Shapley value can then be approximated by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation: Expensive Explainers", "sec_num": "1.1" }, { "text": "\\phi_{f,i}(x) = \\frac{1}{s} \\sum_{k=1}^{s} \\big( f(\\mathrm{Pre}_i(O_k) \\cup x_i) - f(\\mathrm{Pre}_i(O_k)) \\big)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation: Expensive Explainers", "sec_num": "1.1" }, { "text": "Sundararajan et al. (2017) report that s between 20 and 300 is usually enough to approximate IG. Let us set s := 20. This requires 40 passes (forward and backward) through model f to explain a single instance in production and furthermore, after only 50k explanations the computational costs of training are also already surpassed. In the case of SV, again setting s := 20 and assuming only 512 input features (i.e. tokens to an NLP model), Footnote 1: There are several variants of Shapley Value Sampling. This sampling method is based on PyTorch's Captum (Kokhlikyan et al., 2020) library, that we also use for our experiments: https://captum.ai/api/shapley_value_sampling.html, last accessed March 26, 2021. Figure 1: Explanations (attribution maps) for a BERT-based sentiment classification (best viewed digitally). The input is taken from the test split and was classified into Positive. Top: Integrated Gradients (s = 20, 40 passes through classifier required). Bottom: Empirical Integrated Gradients (1 pass through Empirical Explainer required). Attribution scores were normalized on sequence level. 
Red: positive; blue: negative.", "cite_spans": [ { "start": 20, "end": 26, "text": "(2017)", "ref_id": null }, { "start": 441, "end": 442, "text": "1", "ref_id": null }, { "start": 547, "end": 572, "text": "(Kokhlikyan et al., 2020)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation: Expensive Explainers", "sec_num": "1.1" }, { "text": "one already needs to conduct 20 * 512 = 10240 passes to generate an input attribution map for a single classification decision. This means that SV surpasses the training costs specified above after only 195 explanations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation: Expensive Explainers", "sec_num": "1.1" }, { "text": "This may only have a small impact if the number of required explanations is low. However, there are strong indications that explainability will take (or retain) an important role in many neural systems: For example, there are legal regulations, such as the EU's GDPR which hints at a \"right to explanation\" (Goodman and Flaxman, 2017) . For such cases, a 1:1 ratio in production between model outputs and explanations is not improbable. If the employed explainability method requires more than one additional pass through the model (as many do, cf. below), there then is a tipping point at which the energy need of explanations exceeds the energy needs of both model training and in-production inference.", "cite_spans": [ { "start": 307, "end": 334, "text": "(Goodman and Flaxman, 2017)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation: Expensive Explainers", "sec_num": "1.1" }, { "text": "IG and SV are not the only tipping point methods. Other expensive prominent and recent methods and variants are proposed by Zeiler and Fergus All of the above listed explainers require more than one additional pass through the model. This is why in general the following should hold across methods: The smaller the model, the greener the explanation. In terms of energy efficiency, explainability therefore benefits from model compression, distillation, or quantization. These are dynamic fields with a lot of active research which is why in the remainder of this paper we instead focus on something else: The mitigation of the ecological impact of tipping-point methods that dominate the cost term in the example cited in this section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation: Expensive Explainers", "sec_num": "1.1" }, { "text": "These are our main contributions in this paper:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation: Expensive Explainers", "sec_num": "1.1" }, { "text": "1. We propose to utilize the task of feature attribution modelling to efficiently model the attribution maps of expensive explainers. 2. We address feature attribution modelling with trainable explainers that we coin Empirical Explainers. 3. We evaluate their performance qualitatively and quantitatively in the language domain and establish them as an efficient alternative to computationally expensive explainers in applications where an approximation error is tolerable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation: Expensive Explainers", "sec_num": "1.1" }, { "text": "Informally, an EMPIRICAL EXPLAINER is a model that has learned from data to efficiently model the feature attribution maps of an expensive explainer. 
For training, one collects sufficiently many attribution maps from the expensive explainer and then maximizes the likelihood of these target attributions under the Empirical Explainer. An expensive explainer may, for instance, be a costly attribution method such as Integrated Gradients that is used to return attributions for the decisions of a classifier, say, a BERT-based (Devlin et al., 2019) sentiment classifier. The corresponding Empirical Explainer could be a separate neural network, similar in size to the sentiment classifier, consuming the same input tokens as the sentiment model, but instead of predicting the sentiment class, it is trained to predict the integrated gradients for each token.", "cite_spans": [ { "start": 526, "end": 547, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Framework: Empirical Explainers", "sec_num": "2" }, { "text": "Whereas the original Integrated Gradients explainer requires multiple passes through the classifier, producing the empirical integrated gradients Figure 2 : Explanations (attribution maps) for a BERT-based sentiment classification (best viewed digitally) with prominent approximation errors. The input is taken from the test split and was classified into Positive. Top: Integrated Gradients (s = 20, 40 passes through classifier required). Bottom: Empirical Integrated Gradients (1 pass through Empirical Explainer required). Attribution scores were normalized on sequence level. Red: positive; blue: negative. Note that contrary to the target explanation (top) the empirical integrated gradients for the token tormented are prominently negative (bottom).", "cite_spans": [], "ref_spans": [ { "start": 146, "end": 154, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Framework: Empirical Explainers", "sec_num": "2" }, { "text": "requires just one pass through the similarly sized Empirical Explainer. Empirical explanations come with an accuracy-efficiency trade-off that we discuss in the course of a more formal definition of Empirical Explainers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Framework: Empirical Explainers", "sec_num": "2" }, { "text": "For the more formal definition, we need to fix notation first. Let E f : IR d \u2192 IR d be the expensive explainer that maps inputs onto attributions. Furthermore, let an Empirical Explainer be a function e \u03b8 : IR d \u2192 IR d , parametrized by \u03b8, which also returns attribution maps. Let || \u2022 || be a penalty for the inefficiency of a computation, e.g. a count of floating point operations, energy consumption or number of model passes needed. Furthermore, let us assume, without the loss of generality, that ||E f (x)|| >= ||e \u03b8 (x)|| always holds; i.e., the Empirical Explainer -which we develop and trainis never more inefficient than the original, expensive explainer. Let D :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Framework: Empirical Explainers", "sec_num": "2" }, { "text": "IR d \u00d7 IR d \u2192 [0, 1] be a similarity measure, where D(l, m) = 0 if l = m, for l, m \u2208 IR d and \u03b1, \u03b2 \u2208 [0, 1] with \u03b1 + \u03b2 = 1. 
For data X, we define an \u03b1-optimal Empirical Explainer by the \\arg\\min_{\\theta \\in \\Theta} \\frac{1}{|X|} \\sum_{x \\in X} \\Big[ \\underbrace{\\alpha D(E_f(x), e_\\theta(x))}_{\\text{accuracy}} + \\underbrace{\\beta \\frac{||e_\\theta(x)||}{||E_f(x)||}}_{\\text{efficiency}} \\Big].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Framework: Empirical Explainers", "sec_num": "2" }, { "text": "(1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Framework: Empirical Explainers", "sec_num": "2" }, { "text": "The first term describes how accurately the Empirical Explainer e_\u03b8 models the expensive explainer E_f. The second term compares the efficiency of the two explainers. For \u03b1 = 1, efficiency is considered unimportant and e_\u03b8 := E_f can be set to minimize Eq. 1. \u03b1 < 1 allows one to optimize efficiency at the cost of accuracy, which brings about the trade-off:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Properties", "sec_num": "2.1" }, { "text": "One may not succeed in increasing efficiency while maintaining accuracy. In fact, there is generally no exact guarantee for how accurately e_\u03b8 models E_f for new data. Furthermore, while several expensive explainers, such as Integrated Gradients or Shapley Values, were developed axiomatically to have desirable properties, Empirical Explainers are derived from data -empirically. Consequently, the evidence and guarantees Empirical Explainers offer for their faithfulness to the downstream model are empirical in nature and upper-bounded by the faithfulness of the expensive explainer used to train them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Properties", "sec_num": "2.1" }, { "text": "We point this out explicitly because we would like to emphasize that we do not regard an Empirical Explainer as a new explainability method, nor do we argue that it can be used to replace the original expensive explainer everywhere. There are certainly situations for which Empirical Explainers are unsuitable for any \u03b1 \u2260 1; critical cases in which explanations must have guaranteed properties.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Properties", "sec_num": "2.1" }, { "text": "Nevertheless, we still see a huge potential for Empirical Explainers where approximation errors are tolerable: Consider, for instance, a search engine powered by a neural model in the back-end. Without the need to employ the expensive explainer, Empirical Explainers can efficiently provide the user with clues about what the model probably considers relevant in their query (according to the expensive explainer).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Properties", "sec_num": "2.1" }, { "text": "In this section, we report on the performance of Empirical Explainers that we trained and tested in the language domain. We conducted tests with two prominent and expensive explainers, Integrated Gradients and Shapley Value Sampling, varying the experiments across four state-of-the-art language classifiers, trained on four different tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "The experiments address the question of whether or not it is feasible -in principle -to train efficient Empirical Explainers while achieving significant accuracy. All experiments, code, models and data are open source and can be retrieved from https://github.com/DFKI-NLP/emp-exp. The most important choices are documented in the following paragraphs. 
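As a preview of the overall recipe, the following minimal, self-contained sketch illustrates feature attribution modelling on a toy setup: target attribution maps are first collected from an expensive explainer (here, a plain sampling-based Integrated Gradients loop over dense inputs, not the Captum-based pipeline of our experiments), and an Empirical Explainer is then fitted to them with an MSE loss. All model sizes, names, and hyperparameters below are illustrative only.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-ins: in the paper the downstream model is a fine-tuned
# transformer classifier over tokens; a small MLP over d dense features
# is enough to illustrate the recipe.
d, n_classes, n_train = 16, 2, 512
downstream = nn.Sequential(nn.Linear(d, 32), nn.Tanh(), nn.Linear(32, n_classes))


def integrated_gradients(model, x, baseline, target, s=20):
    """Sampling-based IG approximation (cf. Sec. 1.1): average the gradients
    along the straight line from the baseline to the input."""
    grads = []
    for k in range(1, s + 1):
        point = (baseline + k / s * (x - baseline)).requires_grad_(True)
        score = model(point)[:, target].sum()
        grad, = torch.autograd.grad(score, point)
        grads.append(grad)
    return (x - baseline) * torch.stack(grads).mean(dim=0)


# 1) Collect target attribution maps from the expensive explainer
#    (explaining the predicted class, as in our experiments).
X = torch.randn(n_train, d)
baselines = torch.zeros_like(X)
with torch.no_grad():
    preds = downstream(X).argmax(dim=1)
targets = torch.stack([
    integrated_gradients(downstream, X[i:i + 1], baselines[i:i + 1], preds[i].item())
    for i in range(n_train)
]).squeeze(1).detach()

# 2) Fit the Empirical Explainer to the collected maps with an MSE loss;
#    it consumes the same inputs but outputs one score per feature.
empirical_explainer = nn.Sequential(nn.Linear(d, 32), nn.Tanh(), nn.Linear(32, d))
optimizer = torch.optim.Adam(empirical_explainer.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(empirical_explainer(X), targets)
    loss.backward()
    optimizer.step()

# Explaining a new instance now costs a single forward pass.
cheap_attribution = empirical_explainer(torch.randn(1, d))
```

At inference time the Empirical Explainer produces all attribution scores in one forward pass, whereas the expensive explainer in the sketch needs 2s model passes per instance.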
Before going into greater detail, it is noteworthy that Eq. 1 provides a theoretical framework which one does not have use directly for explicit optimization. For example, in this work, we address the first objective, accuracy, by fitting Empirical Explainers to the attribution maps of expensive explainers. However, the second objective, efficiency, is addressed implicitly by design, i.e. the Empirical Explainers, in contrast to their expensive counterparts, are designed (and trained) in a way s.t. only a single forward pass is required through a model similar in size to the downstream model. In the future, it would be very interesting to fully incorporate Eq. 1 in a differential setting, i.e. also optimize for efficiency automatically, rather than by manual design.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "We trained four Empirical Explainers. All explainers consume only the input tokens to the downstream model and return an attribution score for each token. The first one (EmpExp-BERT-IG) was trained to predict integrated gradients w.r.t. the input tokens to a BERT-based IMDB movie review (Maas et al., 2011) classifier. For the second Empirical Explainer (EmpExp-XLNet-SV), we varied the downstream model architecture, task and target explainer: EmpExp-XLNet-SV predicts the Shapley Values (as returned by the expensive Shapley Value Sampling explainer) for the inputs of an XLNet-based (Yang et al., 2019) natural language inference classifier that was trained on the SNLI (Bowman et al., 2015) dataset. The third (EmpExp-RoBERTa-IG) and fourth (EmpExp-ELECTRA-SV) empirical explainers again approximate IG and SV, but for a RoBERTa-based news topic classifier trained on the AG News dataset (Zhang et al., 2015) and an ELECTRA (small)-based model (Clark et al., 2020) that detects paraphrases, trained on the PAWS dataset (subset \"labelled_final\") (Zhang et al., 2019) , respectively.", "cite_spans": [ { "start": 288, "end": 307, "text": "(Maas et al., 2011)", "ref_id": "BIBREF20" }, { "start": 587, "end": 606, "text": "(Yang et al., 2019)", "ref_id": "BIBREF34" }, { "start": 893, "end": 913, "text": "(Zhang et al., 2015)", "ref_id": "BIBREF36" }, { "start": 949, "end": 969, "text": "(Clark et al., 2020)", "ref_id": "BIBREF6" }, { "start": 1050, "end": 1070, "text": "(Zhang et al., 2019)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "The Empirical Explainers were trained on target attributions that we generated with IG and SV with s := 20 samples. For EmpExp-RoBERTa-IG, we used s := 25 due to a slower convergence rate (cf. below). Explanations were generated for the output neuron with the maximal activation. EmpExp-BERT-IG was trained with early stopping using the IG attribution maps for the full IMDB train split (25k). EmpExp-XLNet-SV was trained with around 100k SV attribution maps for the SNLI train split with early stopping, for which we used the 10k attribution maps for the validation split. We did not use all training instances in the split for EmpExp-XLNet-SV, due to the computational costs of Shapley Value Sampling. 
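To illustrate where this cost comes from, the sketch below shows a plain permutation-based Shapley Value Sampling estimator over a dense feature vector; it is our own simplification for illustration, not Captum's implementation, and all names are hypothetical.

```python
import torch


def shapley_value_sampling(model, x, baseline, target, s=20):
    """Permutation-based approximation of Shapley Values (cf. Sec. 1.1).

    Features are added to the baseline in a random order; the change in the
    model output is credited to the feature that was just added.
    """
    n_features = x.shape[-1]
    phi = torch.zeros(n_features)
    with torch.no_grad():
        for _ in range(s):
            order = torch.randperm(n_features)
            current = baseline.clone()
            prev_score = model(current)[:, target].item()
            for i in order:
                current[..., i] = x[..., i]   # add feature i to the prefix
                score = model(current)[:, target].item()
                phi[i] += score - prev_score  # marginal contribution of i
                prev_score = score            # reuse the prefix score
    return phi / s
```

Every permutation requires roughly one model call per feature, which is where the 20 * 512 = 10240 passes from Section 1.1 and the 700 classifier passes per explanation in Fig. 4 come from.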
In case of EmpExp-RoBERTa-IG and EmpExp-ELECTRA-SV it was possible to train with the full train splits again, 108k (12k held out) and around 50k instances, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "As mentioned above, the expensive explainers require user-defined baselines. For the baselines, we replaced all non-special tokens in the input sequence with pad tokens. For the expensive IG, we produced attribution maps for the embedding layer and projected the attribution scores onto tokens by summing them over the token dimension. For the expensive SV, the input IDs were perturbed. During perturbation, we grouped and treated special tokens (CLS, SEP, PAD, ...) in the original input as one feature to accelerate the computation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "In architectural terms, the Empirical Explainers are very similar to the downstream models: We heuristically decided to copy the fine-tuned BERT, XLNet, RoBERTa and ELECTRA encoders from the classifiers and instead of the classification layers on top, we initialized new fully connected layers with T output neurons. T was lower bound Figure 4 : Explanations (attribution maps) for an XLNet-based NLI classification with prominent approximation errors (best viewed digitally). The input is taken from the test split and was classified into Contradiction. Top: Shapley Value Samples (s = 20, 700 passes through classifier required). Bottom: Empirical Shapley Values (1 pass through Empirical Explainer required). Attribution scores were normalized on sequence level. Red: positive; blue: negative. Note that, contrary to the target explanation (top), the empirical Shapley Value for the token running is in the negative regime (bottom).", "cite_spans": [], "ref_spans": [ { "start": 335, "end": 343, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "by the maximum input token sequence length to the downstream model in the respective dataset: T = 512 for BERT/IMDB and RoBERTa/AG News, T = 130 for XLNet/SNLI, and T = 145 for ELECTRA/PAWS. All input sequences were padded to T and we did not treat padding tokens different from other tokens, when training the Empirical Explainers. Please note that the sequence length has a considerable impact on the runtime of the SV explainer in particular, which is why limiting T increases comparability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "We trained the Empirical Explainers to output the right (in accordance with the expensive explainers) attribution scores for the input tokens, using an MSE loss between E f (x 1 ) . . . E f (x T ) and e \u03b8 (x 1 ) . . . e \u03b8 (x T ) where x = x 1 , x 2 , . . . x T is a sequence of input tokens. 2 To put the performance of an Empirical Explainer into perspective, we propose the following baseline, which is the strongest we can think of: We take the original expensive explainer with a reduced number of samples as the baseline. To position the Empirical Explainer against this alternative energy saving strategy, we compute convergence curves. Starting with s = 1, we incrementally increase the number of samples until s = 19 (s = 24 in the case of EmpExp-RoBERTa-IG) and collect attribution maps from the expensive explainer for the different choices of s. 
We then compute the MSEs of these attribution maps when compared to the target attributions (with s = 20 or s = 25). We average the MSEs across the test split. The same is done for the Empirical Explainer.", "cite_spans": [ { "start": 292, "end": 293, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "In the following, we report the experimental results, divided into the aspects of task performance, explanation efficiency and explanation accuracy. For SV with s = 20, assuming a token sequence length of 100 for the purpose of discussion, 2000 model passes are required. For the empirical explanations, only one (additional) forward pass through a similarly sized model (the Empirical Explainer) is necessary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results & Discussion", "sec_num": "4" }, { "text": "Contrary to runtime (and energy consumption) measures, the number of required model passes is largely invariant of available hardware and implementation details. For the sake of completeness, we nevertheless also report our runtimes in appendix A. In summary, generating the expensive explanations for the test splits took between around 02:15 and 48 hours, whereas the empirical explanations only required between 02:05 and 07:14 minutes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results & Discussion", "sec_num": "4" }, { "text": "The runtimes are not definitive, however. We were unable to establish a fair game for the explainers. For example, due to implementation details and memory issues we explained the data instancewise with the expensive explainers while our Empirical Explainers easily allowed batch processing. We expect that the expensive explainers can be accelerated but due to the larger number of model passes required, they will very likely not outperform their empirical counterparts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results & Discussion", "sec_num": "4" }, { "text": "The Empirical Explainers come with additional training costs, which we also report in appendix A. Training took between 02:15 and 07:00 hours. These additional training costs are thus quickly outweighed by the expensive explainers, in particular in a continuous in-production setting. Explanation Accuracy Regarding the accuracy term in Eq. 1, Figs. 1, 2, 3 and 4 provide anecdotal qualitative evidence that the Empirical Explainers are capable of modelling their expensive counterparts well, with varying degrees of approximation errors. Alongside this paper, we provide four files (see repository) with around 25k (IMDB), 10k (SNLI), 7.6k (AG News) and 8k (PAWS) lines, each of which contains an HTML document that depicts a target attribution and its empirical counterpart from the test set. The heatmaps in the figures mentioned above are taken from the accompanying files. Figs. 1 and 3 are instances of what we consider surprisingly accurate approximations of the expensive target attribution maps, despite challenging inputs. Let us first consider Fig. 1 in greater detail. Consider the tokens favorites and irritated that are not attributed much importance by the expensive explainer (IG, top) but could be considered signal words for the positive and negative class, respectively and thus pose a challenge for the Empirical Explainer. 
Nevertheless, in accordance with the expensive target explainer, the Empirical Explainer (bottom) does not attribute the classifier output primarily to these tokens but instead accurately assigns a lot of weight to It's overall pretty good.", "cite_spans": [], "ref_spans": [ { "start": 344, "end": 363, "text": "Figs. 1, 2, 3 and 4", "ref_id": "FIGREF2" }, { "start": 878, "end": 891, "text": "Figs. 1 and 3", "ref_id": "FIGREF2" }, { "start": 1055, "end": 1061, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Results & Discussion", "sec_num": "4" }, { "text": "A similar phenomenon can be observed in Fig. 2 for the token love. The approximation in this figure, however, also contains a prominent approximation error. The Empirical Explainer erroneously attributes a salient negative score to the token tormented while the target explainer does not highlight that token. Similarly, in Fig. 4 the Empirical Explainer returns a negative score for running, whereas the expensive target explainer has returned a positive score.", "cite_spans": [], "ref_spans": [ { "start": 40, "end": 46, "text": "Fig. 2", "ref_id": null }, { "start": 324, "end": 330, "text": "Fig. 4", "ref_id": null } ], "eq_spans": [], "section": "Results & Discussion", "sec_num": "4" }, { "text": "We suspect that such errors may result from global priors that the Empirical Explainers have learned and that sometimes outweigh the instantaneous information. For instance, in Fig. 4 the verb running in the premise in conjunction with the (conjugated) verb runs in the hypothesis may statistically be indicative of an entailment in the training data. This is because to produce a contradiction the verb sometimes is simply replaced by another one (cf. Fig. 3 : surfing vs. sun bathing). In this instance, however, the verb is not replaced. Thus, here the prior knowledge of the Empirical Explainer may outweigh the local information in favor of the error that we observe: The Empirical Explainer may signal that running is evidence against the class Contradiction since it finds it in the premise and hypothesis. A similar argument can be put forward for the case of tormented in Fig. 2 .", "cite_spans": [], "ref_spans": [ { "start": 177, "end": 183, "text": "Fig. 4", "ref_id": null }, { "start": 453, "end": 459, "text": "Fig. 3", "ref_id": "FIGREF2" }, { "start": 881, "end": 887, "text": "Fig. 2", "ref_id": null } ], "eq_spans": [], "section": "Results & Discussion", "sec_num": "4" }, { "text": "The above points are rather speculative. A more objective and quantitative analysis of the efficiency/accuracy trade-off is provided in Fig. 5 . The left column depicts the MSE lines of IG for an increasing number of samples in x-direction. We observe that IG converges fast in case of BERT/IMDB. (This may be due to saturation effects in Integrated Gradients, reported on by Miglani et al. (2020) .) In case of RoBERTa/AG News we found a slower convergence rate, which is why we increased the number of samples for the target explainer. We observe that in both cases, the empirical integrated gradients (dashed lines) perform favourably: To outperform the Empirical Explainer by decreasing s, in case of BERT/IMDB one needs to set s > 5 which entails 10 model passes as opposed to the single additional pass through the Empirical Explainer for the empirical explanations. Furthermore, the approximation error is already marginal at the intersection of expensive and empirical line. 
In case of RoBERTa/AG News, one even needs to set s := 18 to be closer to the target than the Empirical Explainer.", "cite_spans": [ { "start": 376, "end": 397, "text": "Miglani et al. (2020)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 136, "end": 142, "text": "Fig. 5", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Results & Discussion", "sec_num": "4" }, { "text": "A similar trend can be observed for the (empirical) Shapley Values in the right column of Fig. 5 . In case of XLNet/SNLI, however, the intersection occurs only after s = 10 which means that the Empirical Explainer needs only 1 11 * 100 = 0.9% of model passes (plus the pass through its output layer) when compared to the next best expensive explainer, again assuming 100 input tokens for the purpose of discussion. In case of ELEC-TRA/PAWS, the Empirical Explainer even beats the expensive explainer with just one sample less than the target.", "cite_spans": [], "ref_spans": [ { "start": 90, "end": 96, "text": "Fig. 5", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Results & Discussion", "sec_num": "4" }, { "text": "In summary, we take the experimental results as a strong indication that Empirical Explainers could become an efficient alternative to expensive explainers (in the language domain) where approximation errors are tolerable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results & Discussion", "sec_num": "4" }, { "text": "The computational burden of individual explainability methods was addressed in numerous works. As mentioned above, Integrated Gradients can only be computed exactly in limit cases and for all other cases, the community relies on the approximate method proposed by Sundararajan et al. (2017) . Similarly, Shapley Values can rarely be computed precisely which is why Shapley Value Sampling was investigated, e.g. by Castro et al. (2009) ; \u0160trumbelj and Kononenko (2010) . Shapley Value Sampling was later unified with other methods under the SHAP framework (Lundberg and Lee, 2017) which yielded the method Ker-nelSHAP that showed improved sample efficiency. Covert and Lee (2020) then analysed the convergence behaviour of KernelSHAP and again further improved runtime. introduced L-Shapley and C-Shapley which accelerate Shapley Value Sampling for structured data, such as dependency trees in NLP.", "cite_spans": [ { "start": 264, "end": 290, "text": "Sundararajan et al. (2017)", "ref_id": "BIBREF33" }, { "start": 414, "end": 434, "text": "Castro et al. (2009)", "ref_id": "BIBREF4" }, { "start": 437, "end": 467, "text": "\u0160trumbelj and Kononenko (2010)", "ref_id": "BIBREF32" }, { "start": 657, "end": 678, "text": "Covert and Lee (2020)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Thus, computational feasibility appears to be a driving force in the research community, already. To the best of our knowledge, however, we are the first to propose the task of feature attribution modelling to mitigate the computational burden of expensive explainers with Empirical Explainers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Technically, self-explaining models (Alvarez-Melis and Jaakkola, 2018) are related to our approach in that they also generate explanations in a forward pass (alongside their classification deci-sion). 
Contrary to self-explaining models, Empirical Explainers can be employed after training for a variety of black box and white box classifiers and explainers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "A source of inspiration for our method was the work by Camburu et al. (2018) . The authors train self-explaining models that return a natural language rationale alongside their classification. Thus, they, too, train an explainer. However, their target explanations (natural language) differ substantially from the ones Empirical Explainers are trained with (attribution scores).", "cite_spans": [ { "start": 55, "end": 76, "text": "Camburu et al. (2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Furthermore, related to our work is the technique of gradient matching for which a network's (integrated) gradients are compared to a target attribution, i.e. a human prior, and then the network's parameters are updated, s.t. the gradients move closer to the target, as done e.g. by Ross et al. 2017; Erion et al. (2019) ; Liu and Avci (2019) . Apart from the loss on an alignment with target attributions, our method and goals diverge from theirs significantly.", "cite_spans": [ { "start": 301, "end": 320, "text": "Erion et al. (2019)", "ref_id": "BIBREF10" }, { "start": 323, "end": 342, "text": "Liu and Avci (2019)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Human priors and expensive target explanations have recently also been used for explanatory interactive learning (XIL). Like Empirical Explainers, XIL is motivated by the expensiveness of a target explainer; in the case of XIL this is a human in the loop. Schramowski et al. (2020) present humans with informative (cf. active learning) instances, the model prediction and an explanation for the prediction The expensive human feedback is then used to improve the model. Apart from the expensive explainer assumption, their approach differs substantially from ours. Very recently, Behrens et al. (2021) contributed to XIL by introducing a method that learns to explain from explanations and in this respect is close to the setting of Empirical Explainers. One fundamental difference between ours and their work is that they propose and focus on a specific class of self-explainable models whereas Empirical Explainers make no assumptions about the underlying predictor and are intended for a variety of model classes, as already mentioned above.", "cite_spans": [ { "start": 256, "end": 281, "text": "Schramowski et al. (2020)", "ref_id": "BIBREF26" }, { "start": 580, "end": 601, "text": "Behrens et al. (2021)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Very recently again, Rajagopal et al. (2021) proposed local interpretable layers as a means to generate concept attributions which in parts aligns with our method, even though their target attributions and task objectives are very different again.", "cite_spans": [ { "start": 21, "end": 44, "text": "Rajagopal et al. (2021)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Lastly, Empirical Explainers can be viewed as a form of knowledge distillation (Hinton et al., 2015) . 
However, contrary to the established approach, we do not assume a parametric teacher network that knowledge is distilled from. Very recently we became aware of the work by Pruthi et al. (2020) who boost the accuracy of a student learner with explanations in the form of a subset of tokens that are relevant to the teacher decision, determined by an explainer. In a sequence classification task, the student is trained to identify the relevant tokens and could thus be considered an Empirical Explainer. The task, however, is not to predict the original attribution map and the overall objective differs significantly from ours again.", "cite_spans": [ { "start": 79, "end": 100, "text": "(Hinton et al., 2015)", "ref_id": "BIBREF14" }, { "start": 275, "end": 295, "text": "Pruthi et al. (2020)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "In this paper, we take a step towards greener XAI by again reviving energy efficiency as an additional criterion by which to judge an explainability method, alongside important aspects such as faithfulness and plausibility. In this context, we propose feature attribution modelling with efficient Empirical Explainers. In the language domain, we investigate the efficiency/accuracy trade-off and find that it is possible to generate empirical explanations with significant accuracy, at a fraction of the costs of the expensive counterparts. We take this as a strong indication that Empirical Explainers could be a viable alternative to expensive explainers where approximation errors are tolerable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion & Future Directions", "sec_num": "6" }, { "text": "Regarding future directions: The Empirical Explainers we trained are our concrete model choices. The framework we propose allows for many other approaches. For instance, one could provide the Empirical Explainers with additional information, such as the gradient w.r.t. the inputs. This would require an additional pass through the model but may possibly further boost accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion & Future Directions", "sec_num": "6" }, { "text": "We would like to note that we trained and tested our Empirical Explainers only on in-domain data but their behaviour on out-of-domain data should be investigated, too. Fortunately, since we explain the model decision (the maximum output activation), no gold labels are required to train Empirical Explainers which facilitates data collection immensely. Finally, there are some more sample efficient explainers that should be considered, too.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion & Future Directions", "sec_num": "6" }, { "text": "Even though we do not solve for Eq. 1 directly, please note that for evaluation we can normalize the attribution scores to [0, 1] prior to computing the MSE and this way force the MSE into the interval [0, 1] to comply with the constraints for Eq. 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Generating the expensive target explanations for the official IMDB test split (25k instances, T = 512, s = 25, BERT, Titan V) with Integrated Gradients took us 7:17 hours (6:22 hours for the 22500 training instances). 
Generating the expensive Shapley Values for the SNLI test split (\u223c 10k instances, T = 130, s = 20, XLNet, Quadro P5000) took us 48:22 hours (and over 600 GPU hours for under 100k training instances). It took us over 2:15 hours to explain the 7600 instances in the test split of the AG News dataset with IG (T = 512, s = 25, RoBERTa, RTX2080Ti; over 31 hours for the train split). For the PAWS test split (T = 145, s = 20, 8k instances, ELECTRA (small), RTX6000) we needed over 18 GPU hours (over 126 GPU hours for the 49401 instances in the train split, using NVIDIA's RTX3090 and RTX2080Ti).In contrast, generating the empirical explanations took us only 07:14 minutes for the IMDB test split on the Titan GPU and only 02:05 minutes for SNLI test split on the Quadro P5000 GPU. The AG News test split took 03:16 minutes to explain (RTX3090) and the PAWS test split was explained empirically in only 02:19 minutes (RTX3090).The training of EmpExp-BERT-IG terminated after 10 epochs (Titan V), which took less than 4 hours. EmpExp-XLNet-SV (Quadro P5000), EmpExp-RoBERTa-IG (RTX3090), and EmpExp-ELECTRA-SV (RTX3090) terminated after 7, 3, and 8 epochs, respectively (<7 hours, < 2:15 hours, and <1 hours).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Runtimes", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Towards robust interpretability with self-explaining neural networks", "authors": [ { "first": "David", "middle": [], "last": "Alvarez-Melis", "suffix": "" }, { "first": "Tommi", "middle": [], "last": "Jaakkola", "suffix": "" } ], "year": 2018, "venue": "Advances in Neural Information Processing Systems", "volume": "31", "issue": "", "pages": "7775--7784", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Alvarez-Melis and Tommi Jaakkola. 2018. To- wards robust interpretability with self-explaining neural networks. Advances in Neural Information Processing Systems, 31:7775-7784.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Bandits for learning to explain from explanations", "authors": [ { "first": "Freya", "middle": [], "last": "Behrens", "suffix": "" }, { "first": "Stefano", "middle": [], "last": "Teso", "suffix": "" }, { "first": "Davide", "middle": [], "last": "Mottin", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2102.03815" ] }, "num": null, "urls": [], "raw_text": "Freya Behrens, Stefano Teso, and Davide Mottin. 2021. Bandits for learning to explain from explanations. arXiv preprint arXiv:2102.03815.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A large annotated corpus for learning natural language inference", "authors": [ { "first": "R", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "Gabor", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Potts", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. 
In EMNLP.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "e-snli: Natural language inference with natural language explanations", "authors": [ { "first": "Oana-Maria", "middle": [], "last": "Camburu", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Lukasiewicz", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2018, "venue": "Advances in Neural Information Processing Systems", "volume": "31", "issue": "", "pages": "9539--9549", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oana-Maria Camburu, Tim Rockt\u00e4schel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-snli: Nat- ural language inference with natural language expla- nations. Advances in Neural Information Process- ing Systems, 31:9539-9549.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Polynomial calculation of the shapley value based on sampling", "authors": [ { "first": "Javier", "middle": [], "last": "Castro", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "G\u00f3mez", "suffix": "" }, { "first": "Juan", "middle": [], "last": "Tejada", "suffix": "" } ], "year": 2009, "venue": "Computers & Operations Research", "volume": "36", "issue": "5", "pages": "1726--1730", "other_ids": {}, "num": null, "urls": [], "raw_text": "Javier Castro, Daniel G\u00f3mez, and Juan Tejada. 2009. Polynomial calculation of the shapley value based on sampling. Computers & Operations Research, 36(5):1726-1730.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "L-shapley and c-shapley: Efficient model interpretation for structured data", "authors": [ { "first": "Jianbo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Le", "middle": [], "last": "Song", "suffix": "" }, { "first": "J", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Michael I Jordan", "middle": [], "last": "Wainwright", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jianbo Chen, Le Song, Martin J Wainwright, and Michael I Jordan. 2019. L-shapley and c-shapley: Efficient model interpretation for structured data. International Conference on Learning Representa- tions 2019.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Electra: Pre-training text encoders as discriminators rather than generators", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Le", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2003.10555" ] }, "num": null, "urls": [], "raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than genera- tors. 
arXiv preprint arXiv:2003.10555.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Improving kernelshap: Practical shapley value estimation via linear regression", "authors": [ { "first": "Ian", "middle": [], "last": "Covert", "suffix": "" }, { "first": "Su-In", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2012.01536" ] }, "num": null, "urls": [], "raw_text": "Ian Covert and Su-In Lee. 2020. Improving kernelshap: Practical shapley value estimation via linear regres- sion. arXiv preprint arXiv:2012.01536.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "J", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirec- tional transformers for language understanding. In NAACL-HLT.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The shapley taylor interaction index", "authors": [ { "first": "Kedar", "middle": [], "last": "Dhamdhere", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Mukund", "middle": [], "last": "Sundararajan", "suffix": "" } ], "year": 2020, "venue": "", "volume": "2020", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kedar Dhamdhere, Ashish Agarwal, and Mukund Sun- dararajan. 2020. The shapley taylor interaction in- dex. In ICML 2020.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning explainable models using attribution priors", "authors": [ { "first": "Gabriel", "middle": [], "last": "Erion", "suffix": "" }, { "first": "D", "middle": [], "last": "Joseph", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Janizek", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Sturmfels", "suffix": "" }, { "first": "Su-In", "middle": [], "last": "Lundberg", "suffix": "" }, { "first": "", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.10670" ] }, "num": null, "urls": [], "raw_text": "Gabriel Erion, Joseph D Janizek, Pascal Sturmfels, Scott Lundberg, and Su-In Lee. 2019. Learning explainable models using attribution priors. arXiv preprint arXiv:1906.10670.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "European union regulations on algorithmic decision-making and a \"right to explanation", "authors": [ { "first": "Bryce", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "Seth", "middle": [], "last": "Flaxman", "suffix": "" } ], "year": 2017, "venue": "AI magazine", "volume": "38", "issue": "3", "pages": "50--57", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bryce Goodman and Seth Flaxman. 2017. European union regulations on algorithmic decision-making and a \"right to explanation\". 
AI magazine, 38(3):50- 57.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Considering likelihood in nlp classification explanations with occlusion and language modeling", "authors": [ { "first": "David", "middle": [], "last": "Harbecke", "suffix": "" }, { "first": "Christoph", "middle": [], "last": "Alt", "suffix": "" } ], "year": 2020, "venue": "ACL 2020 SRW", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Harbecke and Christoph Alt. 2020. Considering likelihood in nlp classification explanations with oc- clusion and language modeling. ACL 2020 SRW.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Towards the systematic reporting of the energy and carbon footprints of machine learning", "authors": [ { "first": "P", "middle": [], "last": "Henderson", "suffix": "" }, { "first": "Jie-Ru", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Romoff", "suffix": "" }, { "first": "Emma", "middle": [], "last": "Brunskill", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Joelle", "middle": [], "last": "Pineau", "suffix": "" } ], "year": 2020, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Henderson, Jie-Ru Hu, Joshua Romoff, Emma Brun- skill, Dan Jurafsky, and Joelle Pineau. 2020. To- wards the systematic reporting of the energy and carbon footprints of machine learning. ArXiv, abs/2002.05651.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Distilling the knowledge in a neural network", "authors": [ { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1503.02531" ] }, "num": null, "urls": [], "raw_text": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Towards faithfully interpretable nlp systems: How should we define and evaluate faithfulness", "authors": [ { "first": "Alon", "middle": [], "last": "Jacovi", "suffix": "" }, { "first": "Y", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2020, "venue": "", "volume": "2020", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alon Jacovi and Y. Goldberg. 2020. Towards faithfully interpretable nlp systems: How should we define and evaluate faithfulness? 
ACL 2020.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Captum: A unified and generic model interpretability library for pytorch", "authors": [ { "first": "Narine", "middle": [], "last": "Kokhlikyan", "suffix": "" }, { "first": "Vivek", "middle": [], "last": "Miglani", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Bilal", "middle": [], "last": "Alsallakh", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Reynolds", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Melnikov", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Kliushkina", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Araya", "suffix": "" }, { "first": "Siqi", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2009.07896" ] }, "num": null, "urls": [], "raw_text": "Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, et al. 2020. Captum: A unified and generic model interpretability library for pytorch. arXiv preprint arXiv:2009.07896.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Incorporating priors with feature attribution on text classification", "authors": [ { "first": "Frederick", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Besim", "middle": [], "last": "Avci", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6274--6283", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frederick Liu and Besim Avci. 2019. Incorporating pri- ors with feature attribution on text classification. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 6274- 6283.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Roberta: A robustly optimized bert pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. 
arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A unified approach to interpreting model predictions", "authors": [ { "first": "M", "middle": [], "last": "Scott", "suffix": "" }, { "first": "Su-In", "middle": [], "last": "Lundberg", "suffix": "" }, { "first": "", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "4765--4774", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Ad- vances in neural information processing systems, pages 4765-4774.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Learning word vectors for sentiment analysis", "authors": [ { "first": "Andrew", "middle": [ "L" ], "last": "Maas", "suffix": "" }, { "first": "Raymond", "middle": [ "E" ], "last": "Daly", "suffix": "" }, { "first": "P", "middle": [ "T" ], "last": "Pham", "suffix": "" }, { "first": "D", "middle": [], "last": "Huang", "suffix": "" }, { "first": "A", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew L. Maas, Raymond E. Daly, P. T. Pham, D. Huang, A. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In ACL 2011.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "2020. Investigating saturation effects in integrated gradients", "authors": [ { "first": "Vivek", "middle": [], "last": "Miglani", "suffix": "" }, { "first": "Narine", "middle": [], "last": "Kokhlikyan", "suffix": "" }, { "first": "Bilal", "middle": [], "last": "Alsallakh", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Orion", "middle": [], "last": "Reblitz-Richardson", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2010.12697" ] }, "num": null, "urls": [], "raw_text": "Vivek Miglani, Narine Kokhlikyan, Bilal Alsallakh, Miguel Martin, and Orion Reblitz-Richardson. 2020. Investigating saturation effects in integrated gradi- ents. arXiv preprint arXiv:2010.12697.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Evaluating explanations: How much do explanations from the teacher aid students?", "authors": [ { "first": "Danish", "middle": [], "last": "Pruthi", "suffix": "" }, { "first": "Bhuwan", "middle": [], "last": "Dhingra", "suffix": "" }, { "first": "Livio Baldini", "middle": [], "last": "Soares", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "C", "middle": [], "last": "Zachary", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Lipton", "suffix": "" }, { "first": "William", "middle": [ "W" ], "last": "Neubig", "suffix": "" }, { "first": "", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2012.00893" ] }, "num": null, "urls": [], "raw_text": "Danish Pruthi, Bhuwan Dhingra, Livio Baldini Soares, Michael Collins, Zachary C Lipton, Graham Neubig, and William W Cohen. 2020. Evaluating explana- tions: How much do explanations from the teacher aid students? 
arXiv preprint arXiv:2012.00893.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "SelfExplain: A self-explaining architecture for neural text classifiers", "authors": [ { "first": "Dheeraj", "middle": [], "last": "Rajagopal", "suffix": "" }, { "first": "Vidhisha", "middle": [], "last": "Balachandran", "suffix": "" }, { "first": "E", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2103.12279" ] }, "num": null, "urls": [], "raw_text": "Dheeraj Rajagopal, Vidhisha Balachandran, E. Hovy, and Yulia Tsvetkov. 2021. SelfExplain: A self-explaining architecture for neural text classifiers. arXiv preprint arXiv:2103.12279.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "\"Why should I trust you?\" Explaining the predictions of any classifier", "authors": [ { "first": "Marco", "middle": [ "Tulio" ], "last": "Ribeiro", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Guestrin", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining", "volume": "", "issue": "", "pages": "1135--1144", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \"Why should I trust you?\" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135-1144.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Right for the right reasons: Training differentiable models by constraining their explanations", "authors": [ { "first": "Andrew", "middle": [ "Slavin" ], "last": "Ross", "suffix": "" }, { "first": "Michael", "middle": [ "C" ], "last": "Hughes", "suffix": "" }, { "first": "Finale", "middle": [], "last": "Doshi-Velez", "suffix": "" } ], "year": 2017, "venue": "IJCAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Slavin Ross, Michael C Hughes, and Finale Doshi-Velez. 2017. Right for the right reasons: Training differentiable models by constraining their explanations.
In IJCAI.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Making deep neural networks right for the right scientific reasons by interacting with their explanations", "authors": [ { "first": "Patrick", "middle": [], "last": "Schramowski", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Stammer", "suffix": "" }, { "first": "Stefano", "middle": [], "last": "Teso", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Brugger", "suffix": "" }, { "first": "Franziska", "middle": [], "last": "Herbert", "suffix": "" }, { "first": "Xiaoting", "middle": [], "last": "Shao", "suffix": "" }, { "first": "Hans-Georg", "middle": [], "last": "Luigs", "suffix": "" }, { "first": "Anne-Katrin", "middle": [], "last": "Mahlein", "suffix": "" }, { "first": "Kristian", "middle": [], "last": "Kersting", "suffix": "" } ], "year": 2020, "venue": "Nature Machine Intelligence", "volume": "2", "issue": "8", "pages": "476--486", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Franziska Herbert, Xiaoting Shao, Hans-Georg Luigs, Anne-Katrin Mahlein, and Kristian Kersting. 2020. Making deep neural networks right for the right scientific reasons by interacting with their explanations. Nature Machine Intelligence, 2(8):476-486.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Restricting the flow: Information bottlenecks for attribution", "authors": [ { "first": "Karl", "middle": [], "last": "Schulz", "suffix": "" }, { "first": "Leon", "middle": [], "last": "Sixt", "suffix": "" }, { "first": "Federico", "middle": [], "last": "Tombari", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Landgraf", "suffix": "" } ], "year": 2020, "venue": "", "volume": "2020", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karl Schulz, Leon Sixt, Federico Tombari, and Tim Landgraf. 2020. Restricting the flow: Information bottlenecks for attribution. ICLR 2020.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Pattern-guided integrated gradients", "authors": [ { "first": "Robert", "middle": [], "last": "Schwarzenberg", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Castle", "suffix": "" } ], "year": 2020, "venue": "ICML 2020 Workshop on Human Interpretability in Machine Learning (WHI)", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2007.10685" ] }, "num": null, "urls": [], "raw_text": "Robert Schwarzenberg and Steffen Castle. 2020. Pattern-guided integrated gradients. ICML 2020 Workshop on Human Interpretability in Machine Learning (WHI). arXiv preprint arXiv:2007.10685.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "SmoothGrad: removing noise by adding noise", "authors": [ { "first": "Daniel", "middle": [], "last": "Smilkov", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Thorat", "suffix": "" }, { "first": "Been", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Fernanda", "middle": [], "last": "Vi\u00e9gas", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Wattenberg", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1706.03825" ] }, "num": null, "urls": [], "raw_text": "Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Vi\u00e9gas, and Martin Wattenberg. 2017. SmoothGrad: removing noise by adding noise.
arXiv preprint arXiv:1706.03825.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Energy and policy considerations for deep learning in NLP", "authors": [ { "first": "Emma", "middle": [], "last": "Strubell", "suffix": "" }, { "first": "Ananya", "middle": [], "last": "Ganesh", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "McCallum", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3645--3650", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645-3650, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "An efficient explanation of individual classifications using game theory", "authors": [ { "first": "E", "middle": [], "last": "\u0160trumbelj", "suffix": "" }, { "first": "I", "middle": [], "last": "Kononenko", "suffix": "" } ], "year": 2010, "venue": "J. Mach. Learn. Res", "volume": "11", "issue": "", "pages": "1--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. \u0160trumbelj and I. Kononenko. 2010. An efficient explanation of individual classifications using game theory. J. Mach. Learn. Res., 11:1-18.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Axiomatic attribution for deep networks", "authors": [ { "first": "Mukund", "middle": [], "last": "Sundararajan", "suffix": "" }, { "first": "Ankur", "middle": [], "last": "Taly", "suffix": "" }, { "first": "Qiqi", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning", "volume": "70", "issue": "", "pages": "3319--3328", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 3319-3328.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "XLNet: Generalized autoregressive pretraining for language understanding", "authors": [ { "first": "Z", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "J", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "R", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "NeurIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Z. Yang, Zihang Dai, Yiming Yang, J. Carbonell, R. Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding.
In NeurIPS.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Visualizing and understanding convolutional networks", "authors": [ { "first": "Matthew", "middle": [ "D" ], "last": "Zeiler", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Fergus", "suffix": "" } ], "year": 2014, "venue": "13th European Conference on Computer Vision", "volume": "", "issue": "", "pages": "818--833", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew D Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In 13th European Conference on Computer Vision, ECCV 2014, pages 818-833. Springer Verlag.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Character-level convolutional networks for text classification", "authors": [ { "first": "Xiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Junbo", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Yann", "middle": [], "last": "LeCun", "suffix": "" } ], "year": 2015, "venue": "Advances in neural information processing systems", "volume": "28", "issue": "", "pages": "649--657", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. Advances in neural information processing systems, 28:649-657.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "PAWS: Paraphrase adversaries from word scrambling", "authors": [ { "first": "Yuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "" }, { "first": "Luheng", "middle": [], "last": "He", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1904.01130" ] }, "num": null, "urls": [], "raw_text": "Yuan Zhang, Jason Baldridge, and Luheng He. 2019. PAWS: Paraphrase adversaries from word scrambling. arXiv preprint arXiv:1904.01130.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Figure 1: Explanations (attribution maps) for a BERT-based sentiment classification (best viewed digitally). The input is taken from the test split and was classified into Positive. Top: Integrated Gradients (s = 20, 40 passes through classifier required). Bottom: Empirical Integrated Gradients (1 pass through Empirical Explainer required). Attribution scores were normalized on sequence level. Red: positive; blue: negative.", "uris": null, "num": null }, "FIGREF1": { "type_str": "figure", "text": "Zeiler and Fergus (2014); Ribeiro et al. (2016); Lundberg and Lee (2017); Smilkov et al. (2017); Chen et al. (2019); Dhamdhere et al. (2020); Erion et al. (2019); Covert and Lee (2020); Schwarzenberg and Castle (2020); Schulz et al. (2020); Harbecke and Alt (2020).", "uris": null, "num": null }, "FIGREF2": { "type_str": "figure", "text": "Explanations (attribution maps) for an XLNet-based NLI classification (best viewed digitally). The input is taken from the test split and was classified into Contradiction. Top: Shapley Value Samples (s = 20, 380 passes through classifier required). Bottom: Empirical Shapley Values (1 pass through Empirical Explainer required). Attribution scores were normalized on sequence level.
Red: positive; blue: negative.", "uris": null, "num": null }, "FIGREF3": { "type_str": "figure", "text": "Task Performance On the test splits, the classifiers we trained achieved weighted F1 scores of 0.93 (BERT \u2022 IMDB), 0.90 (XLNet \u2022 SNLI), 0.94 (RoBERTa \u2022 AG News) and 0.92 (ELECTRA \u2022 PAWS). Explanation Efficiency Regarding the efficiency term in Eq. 1, in terms of model passes, the Empirical Explainers have a clear advantage over their expensive counterparts. For IG with s = 20, 40 model passes are required; for s = 25, 50 passes.", "uris": null, "num": null }, "FIGREF4": { "type_str": "figure", "text": "Performance of Empirical Explainers (dashed green lines) and convergence curves of expensive explainers (solid black lines), averaged across test sets. The attribution maps returned by the expensive explainers with s = 20 samples in the case of BERT, XLNet and ELECTRA, and s = 25 in the case of RoBERTa (slower convergence behaviour), were regarded as the target explanations. MSEs were computed on a per-sequence basis and then averaged across the test set.", "uris": null, "num": null } } } }
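The figure and results text above describe two quantities that are easy to recompute: the number of model passes an explainer needs per instance, and the per-sequence mean squared error between an Empirical Explainer's attribution map and the expensive target map. The following is a minimal sketch of both computations, not the authors' released code; the helper names (per_sequence_mse, ig_passes, sv_passes), the use of PyTorch tensors for the attribution maps, and the per-feature pass count for Shapley Value Sampling are our assumptions.

# Minimal sketch (not the authors' code) of the quantities described in the figure text.
# Assumptions: each attribution map is a 1-D torch tensor (one per sequence), and passes
# are counted as in the captions: forward + backward per IG sample, and roughly one model
# evaluation per feature and permutation for Shapley Value Sampling.
import torch

def per_sequence_mse(empirical_maps, target_maps):
    """MSE per sequence, then averaged across the test set (as in the convergence figure)."""
    errors = [torch.mean((pred - target) ** 2)
              for pred, target in zip(empirical_maps, target_maps)]
    return torch.stack(errors).mean()

def ig_passes(s):
    """Integrated Gradients with s samples: one forward and one backward pass per sample."""
    return 2 * s

def sv_passes(s, n_features):
    """Shapley Value Sampling: assumed one model evaluation per feature and permutation."""
    return s * n_features

if __name__ == "__main__":
    # Toy attribution maps, just to show the shapes involved.
    preds = [torch.randn(12) for _ in range(3)]
    targets = [torch.randn(12) for _ in range(3)]
    print(per_sequence_mse(preds, targets))
    print(ig_passes(20), ig_passes(25))  # 40 and 50 passes, as reported above
    print(sv_passes(20, 19))             # 380 passes, consistent with the NLI caption

Under these assumptions, the Empirical Explainer's cost is the single forward pass mentioned in the captions, so the counts returned by ig_passes and sv_passes are the per-explanation overhead that is saved once the approximation error is acceptable.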