{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:07:49.511396Z" }, "title": "Contextual explanation rules for neural clinical classifiers", "authors": [ { "first": "Madhumita", "middle": [], "last": "Sushil", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Antwerp", "location": { "country": "Belgium" } }, "email": "" }, { "first": "Simon", "middle": [], "last": "\u0160uster", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Melbourne", "location": {} }, "email": "simon.suster@unimelb.edu.au" }, { "first": "Walter", "middle": [], "last": "Daelemans", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Antwerp", "location": { "country": "Belgium" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Several previous studies on explanation for recurrent neural networks focus on approaches that find the most important input segments for a network as its explanations. In that case, the manner in which these input segments combine with each other to form an explanatory pattern remains unknown. To overcome this, some previous work tries to find patterns (called rules) in the data that explain neural outputs. However, their explanations are often insensitive to model parameters, which limits the scalability of text explanations. To overcome these limitations, we propose a pipeline to explain RNNs by means of decision lists (also called rules) over skipgrams. For evaluation of explanations, we create a synthetic sepsis-identification dataset, as well as apply our technique on additional clinical and sentiment analysis datasets. We find that our technique persistently achieves high explanation fidelity and qualitatively interpretable rules.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Several previous studies on explanation for recurrent neural networks focus on approaches that find the most important input segments for a network as its explanations. In that case, the manner in which these input segments combine with each other to form an explanatory pattern remains unknown. To overcome this, some previous work tries to find patterns (called rules) in the data that explain neural outputs. However, their explanations are often insensitive to model parameters, which limits the scalability of text explanations. To overcome these limitations, we propose a pipeline to explain RNNs by means of decision lists (also called rules) over skipgrams. For evaluation of explanations, we create a synthetic sepsis-identification dataset, as well as apply our technique on additional clinical and sentiment analysis datasets. We find that our technique persistently achieves high explanation fidelity and qualitatively interpretable rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Understanding and explaining decisions of complex models such as neural networks has recently gained a lot of attention for engendering trust in these models, improving them, and understanding them better (Montavon et al., 2018; Belinkov and Glass, 2019) . Several previous studies developing interpretability techniques provide a set of input features or segments that are the most salient for the model output. Approaches such as input perturbation and gradient computation are popular for this purpose (Ancona et al., 2018; Arras et al., 2019) . 
A drawback of these approaches is the lack of information about interactions between different features. While heatmaps (Li et al., 2016b,a; Arras et al., 2017) and partial dependence plots (Lundberg and Lee, 2017) are widely used, they only provide a qualitative view, which quickly gets complex as the number of features increases. To overcome this limitation, rule induction for model interpretability has become popular, as it accounts for interactions between multiple features and output classes (Lakkaraju et al., 2017; Puri et al., 2017; Ming et al., 2018; Ribeiro et al., 2018; Sushil et al., 2018; Evans et al., 2019; Pastor and Baralis, 2019). Most of these works treat the explained models as black boxes, and fit a separate interpretable model on the original input to find rules that mimic the output of the explained model. However, because the interpretable model does not have information about the parameters of the complex model, global explanation is expensive, and the explaining and explained models could fit different curves to arrive at the same output. Sushil et al. (2018) incorporate model gradients in the explanation process to overcome these challenges, but this technique cannot be used with current state-of-the-art models that use word embeddings, due to its reliance on interpretable model input in the form of a bag-of-words. Murdoch and Szlam (2017) explain long short-term memory networks (LSTMs) (Hochreiter and Schmidhuber, 1997) by means of ngram rules, but their rules are limited to the presence of single ngrams and do not capture interactions between ngrams in text. To learn explanation rules for RNNs while overcoming the limitations of the previous approaches, we make the following contributions in this paper: 1. We induce explanation rules over important skipgrams in text, while ensuring that these rules generalize to unseen data. To this end, we quantify skipgram importance in LSTMs by first pooling gradients across embedding dimensions to compute word importance, and then aggregating word importance into skipgram importance. Skipgrams incorporate word order in explanations and increase interpretability.", "cite_spans": [ { "start": 205, "end": 228, "text": "(Montavon et al., 2018;", "ref_id": "BIBREF23" }, { "start": 229, "end": 254, "text": "Belinkov and Glass, 2019)", "ref_id": "BIBREF6" }, { "start": 505, "end": 526, "text": "(Ancona et al., 2018;", "ref_id": "BIBREF2" }, { "start": 527, "end": 546, "text": "Arras et al., 2019)", "ref_id": "BIBREF4" }, { "start": 668, "end": 688, "text": "(Li et al., 2016b,a;", "ref_id": null }, { "start": 689, "end": 708, "text": "Arras et al., 2017)", "ref_id": "BIBREF3" }, { "start": 1089, "end": 1113, "text": "(Lakkaraju et al., 2017;", "ref_id": "BIBREF17" }, { "start": 1114, "end": 1132, "text": "Puri et al., 2017;", "ref_id": "BIBREF28" }, { "start": 1133, "end": 1151, "text": "Ming et al., 2018;", "ref_id": "BIBREF22" }, { "start": 1152, "end": 1173, "text": "Ribeiro et al., 2018;", "ref_id": "BIBREF29" }, { "start": 1174, "end": 1194, "text": "Sushil et al., 2018;", "ref_id": "BIBREF33" }, { "start": 1195, "end": 1214, "text": "Evans et al., 2019;", "ref_id": "BIBREF9" }, { "start": 1215, "end": 1240, "text": "Pastor and Baralis, 2019)", "ref_id": "BIBREF25" }, { "start": 1666, "end": 1686, "text": "Sushil et al. 
(2018)", "ref_id": "BIBREF33" }, { "start": 2021, "end": 2055, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. To overcome existing limitations with au-tomated explanation evaluation (Lertvittayakumjorn and Toni, 2019; Poerner et al., 2018) , we provide a synthetic clinical text classification dataset for evaluating interpretability techniques. We construct this dataset according to existing medical knowledge and clinical corpus. We validate our explanation pipeline on this synthetic dataset by recovering the labeling rules of the dataset. We then apply our pipeline to two clinical datasets for sepsis classification, and one dataset for sentiment analysis. We confirm that the explanation results obtained on synthetic data are scalable to real corpora.", "cite_spans": [ { "start": 75, "end": 110, "text": "(Lertvittayakumjorn and Toni, 2019;", "ref_id": "BIBREF18" }, { "start": 111, "end": 132, "text": "Poerner et al., 2018)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We propose a method to find decision lists as explanation rules for RNNs with word embedding input. We quantify word importance in an RNN by comparing multiple pooling operations (qualitatively and quantitatively). After establishing a desired pooling technique, we move to finding importance of skipgrams, which provides larger context around words in explanations. We then find decision lists that associate the relative importance of multiple skipgrams in the RNN to an output class. This is an extension of our prior work (Sushil et al., 2018) where we find if-then-else rules for feedforward neural networks. However, the previous approach relies on using interpretable inputs independent of word order and is not scalable to the current stateof-the-art approaches that use word embeddings instead. Moreover, explanation of binary classifiers is not supported by that pipeline, and the explanation rules are not generalized to unseen examples. Furthermore, the previous explanation rules are hierarchical, and hence cannot be understood independently without parsing the entire rule hierarchy. In the proposed research, we address all these limitations and extend the explanations to binary cases, unseen data, and to sequential neural networks with word embedding input. Additionally, these explanation rules can be understood as an independent decision path. We present the complete pipeline for our approach, which we name UNRAVEL, in Figure 1. Code for the paper is available on https: //github.com/clips/rnn_expl_rules.", "cite_spans": [ { "start": 526, "end": 547, "text": "(Sushil et al., 2018)", "ref_id": "BIBREF33" } ], "ref_spans": [ { "start": 1443, "end": 1449, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Explanation pipeline", "sec_num": "2" }, { "text": "Saliency (importance) scores of input features are often computed as gradients of the predicted out- put node w.r.t. all the input nodes for all the instances (Simonyan et al., 2013; Adebayo et al., 2018) . In neural architectures that have an embedding layer, interpretable input features are replaced by corresponding low-dimensional embeddings. Due to this, we obtain different saliency scores for different embedding dimensions of a word in a document. Because embedding dimensions are not interpretable, it is difficult to understand what these multiple saliency scores mean. 
To instead obtain a single score for a word by combining the saliency values of all the dimensions, we consider the following commonly used pooling techniques:", "cite_spans": [ { "start": 159, "end": 182, "text": "(Simonyan et al., 2013;", "ref_id": "BIBREF30" }, { "start": 183, "end": 204, "text": "Adebayo et al., 2018)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Word importance computation", "sec_num": "2.1" }, { "text": "\u2022 L2 norm of the gradient scores (Bansal et al., 2016; Hechtlinger, 2016; Poerner et al., 2018).", "cite_spans": [ { "start": 33, "end": 54, "text": "(Bansal et al., 2016;", "ref_id": "BIBREF5" }, { "start": 55, "end": 73, "text": "Hechtlinger, 2016;", "ref_id": "BIBREF12" }, { "start": 74, "end": 95, "text": "Poerner et al., 2018)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Word importance computation", "sec_num": "2.1" }, { "text": "saliency_L2 = \u03a3_dim grad^2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word importance computation", "sec_num": "2.1" }, { "text": "\u2022 Sum of gradients across all the dimensions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word importance computation", "sec_num": "2.1" }, { "text": "saliency_sum = \u03a3_dim grad", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word importance computation", "sec_num": "2.1" }, { "text": "\u2022 Dot product between the embeddings and the gradient scores (Denil et al., 2014; Montavon et al., 2018; Arras et al., 2019). This additionally accounts for the embedding value itself.", "cite_spans": [ { "start": 61, "end": 81, "text": "(Denil et al., 2014;", "ref_id": "BIBREF8" }, { "start": 82, "end": 104, "text": "Montavon et al., 2018;", "ref_id": "BIBREF23" }, { "start": 105, "end": 124, "text": "Arras et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Word importance computation", "sec_num": "2.1" }, { "text": "saliency_dot = \u03a3_dim (emb \u00b7 grad)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word importance computation", "sec_num": "2.1" }, { "text": "We also experimented with max pooling, but we omit the discussion here because the resulting scores show the same patterns as the L2 norm, albeit with higher magnitudes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word importance computation", "sec_num": "2.1" }, { "text": "In Section 4.1, we analyze the importance scores obtained with these techniques qualitatively and quantitatively to identify the preferred one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word importance computation", "sec_num": "2.1" }, { "text": "One of the contributions of this work is to find explanation rules for sequential models such as RNNs. Conjunctive clauses of if-then-else rules are order-independent, although word order is critical for RNNs. To account for word order in input documents, some previous approaches find the most important ngrams instead of only the top words (Murdoch and Szlam, 2017; Jacovi et al., 2018). To incorporate word order in explanation rules as well, we compute the importance of subsequences in the documents before combining different subsequences into conjunctive rules. We define the importance of a subsequence as the mean saliency of all the tokens in that subsequence. We represent subsequences as skipgrams with length in the range [1, 4] and with a maximum of two skip tokens 1 . 
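As an illustrative sketch (the helper names are ours, and keeping the most extreme score for a repeated skipgram is an assumption, since the paper does not specify a tie-breaking rule), dot-product word saliency and skipgram importance can be computed as follows:

```python
import itertools

def word_saliency_dot(emb, grad):
    # saliency_dot: sum over embedding dimensions of emb * grad.
    return (emb * grad).sum(dim=-1)                     # (seq_len,)

def skipgram_importance(tokens, word_scores, max_len=4, max_skips=2):
    # Importance of a skipgram = mean saliency of its tokens. Skipgrams
    # contain 1..max_len tokens and span at most max_len + max_skips
    # consecutive positions (i.e., at most max_skips skipped tokens).
    scores = {}
    n = len(tokens)
    for length in range(1, max_len + 1):
        for start in range(n - length + 1):
            span = range(start + 1, min(n, start + length + max_skips))
            for rest in itertools.combinations(span, length - 1):
                idx = (start,) + rest
                sg = ' '.join(tokens[i] for i in idx)
                s = word_scores[list(idx)].mean().item()
                # Keep the most extreme score seen for this skipgram (assumption).
                if sg not in scores or abs(s) > abs(scores[sg]):
                    scores[sg] = s
    return scores
```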
After computing the scores, we retain the 50 most important skipgrams for every document (based on absolute importance scores). The number of unique skipgrams obtained in this manner is very high. To limit the complexity of explanations, we retain the 5k skipgrams with the highest total absolute importance score across the entire training set and learn explanation rules over these. To this end, we create a bag-of-skipgram-importance representation of the documents, where the vocabulary corresponds to the 5k most important skipgrams across the training set. For ease of understanding, we discretize the importance scores of the skipgrams to represent five different levels of importance: {\u2212\u2212, \u2212, 0, +, ++}. Here \u2212\u2212 and ++ represent a high negative and positive importance, respectively, for the predicted output class, 0 means that the skipgram is absent in the document, and \u2212 and + indicate low negative and positive importance scores, respectively. This skipgram set, along with the output predictions of a model, is then input to a rule induction module to obtain decision lists as explanations.", "cite_spans": [ { "start": 354, "end": 366, "text": "Szlam, 2017;", "ref_id": "BIBREF24" }, { "start": 367, "end": 387, "text": "Jacovi et al., 2018)", "ref_id": "BIBREF14" }, { "start": 728, "end": 731, "text": "[1,", "ref_id": null }, { "start": 732, "end": 734, "text": "4]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Skipgrams to incorporate context", "sec_num": "2.2" }, { "text": "1 Length and skip values in skipgrams were manually decided to include sufficient context while limiting complexity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Skipgrams to incorporate context", "sec_num": "2.2" }, { "text": "As these values increase further, the phrases become sparser, resulting in a larger explanation vocabulary. The feature selection step hence selects a smaller proportion of phrases to retain the same computational complexity, which can limit the explanation coverage/recall.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Skipgrams to incorporate context", "sec_num": "2.2" }, { "text": "In the prediction phase, a model merely applies the knowledge it has learned from the training data. Hence, an explanation technique should not require prior knowledge of the test set to find global explanations of a model. We hypothesize that explanation rules should be consistently accurate between the training data and the predictions on unseen data. In accordance with this hypothesis, instead of learning explanations directly from validation or test instances, which is common in interpretability research (Ribeiro et al., 2018; Sushil et al., 2018), we modify the explanation procedure to learn accurate, transferable explanations only from the training set. We first feed the training data to our neural network and record the corresponding output predictions. These output predictions, combined with the corresponding set of top discretized skipgrams, are used to fit the rule inducer. The hyperparameters of the rule inducer are optimized to best explain the validation set outputs. Finally, we report a score that quantifies how well the learned rules transfer to the test predictions. 
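A sketch of the discretization step, assuming a fixed magnitude threshold t (the paper does not state how the boundary between low and high importance is chosen):

```python
import numpy as np

def discretize(scores, t):
    # scores: (n_docs, n_skipgrams) bag-of-skipgram-importance matrix,
    # with 0.0 wherever a skipgram is absent from a document.
    levels = np.full(scores.shape, '0', dtype=object)
    levels[(scores > 0) & (scores <= t)] = '+'
    levels[scores > t] = '++'
    levels[(scores < 0) & (scores >= -t)] = '-'
    levels[scores < -t] = '--'
    return levels

# Protocol: fit the rule inducer on the discretized training-set skipgrams
# against the network's *training* predictions, tune its hyperparameters on
# the validation predictions, and report fidelity on the test predictions.
```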
This training scheme ensures that the explanations are generalizable to unseen data, instead of overfitting the test set.", "cite_spans": [ { "start": 512, "end": 534, "text": "(Ribeiro et al., 2018;", "ref_id": "BIBREF29" }, { "start": 535, "end": 555, "text": "Sushil et al., 2018)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Learning transferable explanations", "sec_num": "2.3" }, { "text": "We obtain decision lists using PART (Frank and Witten, 1998), which finds simplified paths of partial C4.5 decision trees. These decision lists can be comprehended independently of their order, and support both binary and multi-class cases.", "cite_spans": [ { "start": 36, "end": 60, "text": "(Frank and Witten, 1998)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Learning transferable explanations", "sec_num": "2.3" }, { "text": "A major challenge for interpretability research is the evaluation of the results (Lertvittayakumjorn and Toni, 2019). Human evaluation is not ideal because a model can learn correct classification patterns that are counter-intuitive for humans (Poerner et al., 2018). In complex domains like healthcare, such an evaluation is additionally infeasible. To overcome existing limitations with automated evaluation of explanations, we create a synthetic binary clinical document classification dataset. We base the dataset construction on the sepsis screening guidelines 2 . This is a critical task for preventing deaths in ICUs (Futoma et al., 2017), and new insights about the problem are important in the medical domain. The synthetic dataset includes a subset of sentences from the freely available clinical corpus MIMIC-III (Johnson et al., 2016). The dataset construction process is as follows:", "cite_spans": [ { "start": 79, "end": 114, "text": "(Lertvittayakumjorn and Toni, 2019)", "ref_id": "BIBREF18" }, { "start": 243, "end": 265, "text": "(Poerner et al., 2018)", "ref_id": "BIBREF27" }, { "start": 624, "end": 645, "text": "(Futoma et al., 2017)", "ref_id": "BIBREF11" }, { "start": 813, "end": 845, "text": "MIMIC-III (Johnson et al., 2016)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Synthetic dataset", "sec_num": "3.1" }, { "text": "\u2022 From the MIMIC-III corpus, we sample sentences that are 3-15 words long and mention the keywords discussed in the screening guidelines, grouped into the following sets:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synthetic dataset", "sec_num": "3.1" }, { "text": "1. I: Contains sentences that mention these infection-related keywords: {pneumonia and empyema 3 , meningitis, endocarditis, infection}. 2. Inf l: Contains sentences that mention these inflammation-related keywords: {hypothermia or hyperthermia 4 , leukocytosis or leukopenia, altered mental status, tachycardia, tachypnea, hyperglycemia}. 3. Others: Sentences that do not mention any of the previously stated keywords: Sentence \u2209 {I \u222a Inf l}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synthetic dataset", "sec_num": "3.1" }, { "text": "\u2022 We populate 50k documents with 17 sentences each by randomly sampling one sentence from set I, one sentence for each comma-separated term in set Inf l, and 10 sentences from set Others. 
We additionally populate 20k documents with 17 sentences, all from set Others.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synthetic dataset", "sec_num": "3.1" }, { "text": "\u2022 We then run the CLAMP clinical NLP pipeline (Soysal et al., 2017) to identify if these keywords are negated in the documents.", "cite_spans": [ { "start": 46, "end": 67, "text": "(Soysal et al., 2017)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Synthetic dataset", "sec_num": "3.1" }, { "text": "\u2022 Next, we assign class labels to the documents using the following rule:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synthetic dataset", "sec_num": "3.1" }, { "text": "if the infection term sampled from set I is not negated and at least 2 responses sampled from set Inf l are not negated =\u21d2 Class label is septic, Class label is non-septic otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synthetic dataset", "sec_num": "3.1" }, { "text": "49% of the documents are thus labeled as septic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synthetic dataset", "sec_num": "3.1" }, { "text": "Sampling sentences from the MIMIC-III corpus introduces language diversity through a large vocabulary and varied sentence structures. Use of an imperfect tool to identify negation for document labeling also adds noise to the dataset. These properties are desirable because they allow for controlled explanation evaluation while also simulating real world corpora and tasks, unlike several synthetic datasets used for explanation evaluation (Arras et al., 2019; Chrupala and Alishahi, 2019).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synthetic dataset", "sec_num": "3.1" }, { "text": "For every document, the set of words that are used to assign it a class label includes all the keyword terms about infection from set I that are mentioned in that document, keyword terms about inflammatory response from set Inf l, and their corresponding negation markers as identified by the CLAMP pipeline. We mark these sets of terms, one set per document, as the gold set of important terms for this task. For example in the document:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gold important terms", "sec_num": "3.1.1" }, { "text": "No signs of infection were found. Altered mental status exists. Patient is suffering from hypothermia, the set of gold terms would include all the underlined words. Among these words, infection, altered, mental, status, and hypothermia are keyword terms, and no, signs, and of are terms corresponding to the negation scope.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gold important terms", "sec_num": "3.1.1" }, { "text": "We split the dataset into subsets of 80-10-10% as training-validation-test sets. We obtain a vocabulary of 47,015 tokens after lower-casing the documents without removing punctuation. We replace unknown words in validation and test sets with the unk token. We train LSTM classifiers to predict the document class from the hidden representation after the final timestep, which is obtained after processing the entire document as a sequence of tokens 5 . The classifiers use randomly initialized word embeddings and a single RNN layer without attention. The hidden state size and embedding dimension are set to either 50 or 100. 
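A minimal PyTorch rendering of this classifier (our own sketch, not necessarily the released code):

```python
import torch.nn as nn

class LSTMClassifier(nn.Module):
    # Single-layer LSTM over randomly initialized embeddings, no attention;
    # the class is predicted from the hidden state after the final timestep.
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=50, n_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, num_layers=1, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_classes)

    def forward(self, token_ids):          # (batch, seq_len)
        emb = self.embedding(token_ids)    # (batch, seq_len, emb_dim)
        _, (h_n, _) = self.lstm(emb)       # h_n: (1, batch, hidden_dim)
        return self.out(h_n[-1])           # logits from the final hidden state
```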
We use the Adam optimizer (Kingma and Ba, 2014) with learning rate 0.001 and a batch size of 64 (without hyperparameter optimization). Classification performance is shown in Table 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model:", "sec_num": "3.1.2" }, { "text": "We additionally find explanation rules for sepsis classifiers on the MIMIC-III clinical corpus. We define sepsis label as all the cases where patients are assigned one of the following diagnostic codes:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Real clinical datasets", "sec_num": "3.2" }, { "text": "\u2022 995.91 (Sepsis): Two or more systemic inflammatory response criteria plus a known or suspected infection. 2% of the cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Real clinical datasets", "sec_num": "3.2" }, { "text": "\u2022 995.92 (Severe Sepsis): Sepsis with acute organ dysfunction. 3% of the cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Real clinical datasets", "sec_num": "3.2" }, { "text": "\u2022 785.52 (Septic Shock): Form of severe sepsis where the organ dysfunction involves the cardiovascular system. 4% of the cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Real clinical datasets", "sec_num": "3.2" }, { "text": "We analyze two different setups after removing blank notes and the notes marked as error in the MIMIC-III corpus:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Real clinical datasets", "sec_num": "3.2" }, { "text": "1. We use the last discharge note for every patient to classify whether the patient has sepsis. Class distribution among 58,028 instances is 90-10% for non-septic and septic cases respectively, and the vocabulary size is 229,799. The task is easier in this setup because 70% of septic cases mention sepsis directly, whereas only 13% of non-septic cases mention sepsis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Real clinical datasets", "sec_num": "3.2" }, { "text": "2. We classify whether a patient has a sepsis diagnosis or not using the last note about a patient excluding the categories discharge notes, social work, rehab services and nutrition. We obtain 52,691 patients in this manner, out of which only 9% are septic. The vocabulary size is 87,753. In this setup, only 17% of septic cases mention sepsis, as opposed to 6% of non-septic cases mentioning sepsis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Real clinical datasets", "sec_num": "3.2" }, { "text": "We train 2-layer bidirectional LSTM classifiers with 100 dimensional randomly initialized word embeddings and 100 dimensional hidden layer. We train for 50 epochs with early stopping with patience 5. The remaining data processing and implementation details are the same as discussed for synthetic dataset. Macro F1 score of classification when using discharge notes is 0.68 (septic class F1 is 0.41), and without using discharge notes is 0.60 (septic class F1 is 0.27). Majority baseline is 0.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models:", "sec_num": "3.2.1" }, { "text": "Following Murdoch and Szlam (2017), we explain LSTM classifiers initialized with 300 dimensional Glove (Pennington et al., 2014) embeddings and 150 hidden nodes for binary sentiment classification on the Stanford sentiment analysis (SST2) dataset (Socher et al., 2013) . 
We obtain 84.13% classification accuracy, and our vocabulary size is 13,983.", "cite_spans": [ { "start": 103, "end": 128, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF26" }, { "start": 247, "end": 268, "text": "(Socher et al., 2013)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Sentiment analysis", "sec_num": "3.3" }, { "text": "Several existing approaches for global rule-based interpretability (Lakkaraju et al., 2017; Puri et al., 2017) share one common aspect: they directly use the original input to find explanation rules for complex classifiers, without making use of the parameters of the complex models. However, these approaches do not scale to NLP tasks due to the combinatorial complexity of finding explanation rules. For comparison, as baseline rules, we induce explanations directly from the input data without using gradients of neural models. To this end, we create a bag-of-skipgrams representation by binarizing the most frequent skipgrams to represent whether they are present in a document. We then train rule induction classifiers on this binarized skipgram data to explain neural outputs. We also compare to Anchors (Ribeiro et al., 2018) for SST2 explanations by implementing their submodular pick algorithm for obtaining global explanations. Anchors does not scale to the longer documents used for sepsis classification.", "cite_spans": [ { "start": 67, "end": 91, "text": "(Lakkaraju et al., 2017;", "ref_id": "BIBREF17" }, { "start": 92, "end": 110, "text": "Puri et al., 2017)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline explanation rules", "sec_num": "3.4" }, { "text": "We record fidelity scores of the explanation rules on the test set, and the complexity of these explanations. Fidelity scores refer to how faithful the explanations are to the test output predictions of the explained neural network. Following our prior work (Sushil et al., 2018), we quantify fidelity as the macro F1-measure of the explanations' predictions compared to the original model predictions. We define explanation complexity as the number of rules in an explanation.", "cite_spans": [ { "start": 253, "end": 274, "text": "(Sushil et al., 2018)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation metrics", "sec_num": "3.5" }, { "text": "To compare the different pooling techniques described in Section 2.1, we evaluate the sets of most important words obtained by each technique against the gold sets of important terms for the documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparing pooling techniques", "sec_num": "4.1" }, { "text": "In Figure 2, we compare the word importance distributions of the pooling techniques for an instance in the validation set of the synthetic corpus. The L2 norm provides distributions over positive values only, and the importance scores are low because it squares the gradients. Sum pooling and dot product instead return a distribution over both positive and negative values, with dot product returning a more peaked distribution. However, as we can see, sum and dot product often provide opposite importance signs for the same words. This is caused by the word embedding values, which enter the dot product computation and can be both positive and negative. In this instance, both the true and predicted classes are non-septic. Looking at Figure 2c, we find positive peaks over negative and infection, and negative peaks over altered mental status and hyperglycemia. 
This corresponds to the class labeling rule in the synthetic data, where the non-septic class is assigned when infection terms are negated. These directions of influence are counter-intuitive for sum pooling in Figure 2b. Due to its intuitive, peaked importance distributions, dot product seems to be better than the other techniques. However, we move to quantitative evaluation for a global perspective, because this qualitative analysis is biased towards a specific instance and model.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 740, "end": 749, "text": "Figure 2c", "ref_id": "FIGREF1" }, { "start": 1076, "end": 1085, "text": "Figure 2b", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Qualitative analysis", "sec_num": "4.1.1" }, { "text": "We find the top k tokens for test documents in the synthetic dataset by ranking absolute word importance scores, where k is the number of gold important terms used to label the document. We ignore the 20k documents that only consist of sentences that do not mention any keyword term, and hence have an empty gold set. We compute the accuracy of the set of most important words for every document compared to its corresponding gold set. We then take the mean across all the documents and report it in Table 1. We find that dot product consistently recovers more important tokens than the other pooling techniques across all the classifiers, confirming both the earlier qualitative analysis and the findings of Arras et al. (2019). Hence, we use dot product for computing word importance before inducing explanation rules.", "cite_spans": [ { "start": 703, "end": 722, "text": "Arras et al. (2019)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 502, "end": 509, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Quantitative analysis", "sec_num": "4.1.2" }, { "text": "We additionally see that the mean accuracy is nearly twice as high for the classifier with 50 hidden nodes and 100-dimensional word embeddings as for the larger classifier that uses 100 hidden units instead, although the latter classifier is nearly 5% more accurate. This suggests that the larger network obtains higher performance by focusing on tokens that are not incorporated within the gold keywords. The reason different tokens are considered important could be that our gold set of important terms is noisy:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantitative analysis", "sec_num": "4.1.2" }, { "text": "\u2022 Some tokens such as punctuation symbols are missing from the gold set, although they are important for identifying the scope of negation, as seen in Figure 3.", "cite_spans": [], "ref_spans": [ { "start": 151, "end": 159, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Quantitative analysis", "sec_num": "4.1.2" }, { "text": "\u2022 Some terms in the gold set are not required for correct classification. For example: 1. Too many words are included as negation triggers. For example, in the sentence no signs of infection were found., 'no', 'signs', and 'of' are all added to the gold set as negation markers, although the subset {'no', 'infection'} may be sufficient. 2. Similarly, the keyword altered mental status could already be recognized from a subset of these terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantitative analysis", "sec_num": "4.1.2" }, { "text": "We obtain explanations of all the LSTM classifiers for the synthetic dataset. 
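Fidelity here follows the definition in Section 3.5: macro F1 of the rules' predictions measured against the network's own predictions rather than the gold labels. A minimal sketch, assuming scikit-learn:

```python
from sklearn.metrics import f1_score

def fidelity(rule_preds, model_preds):
    # Agreement of the explanation rules with the neural network's
    # predictions (not with the gold labels).
    return f1_score(model_preds, rule_preds, average='macro')
```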
We record fidelity scores of explanations and the corresponding complexity in Table 2 . We find that when we use the proposed pipeline UNRAVEL for learning gradient-informed rules, we obtain explanations with high fidelity scores also on the test data. On the other hand, with the baseline approach, we obtain nearly 15% lower fidelity scores. In addition, explanations are more complex with the baseline approach. This confirms that making use of model parameters by means of gradients acts as an additional useful cue for the rule-based explainability module, thus resulting in more faithful explanations. We present some examples of explanation rules for the most accurate LSTM classifier for the synthetic dataset in Figure 3 . Here, we indicate infection keywords that were used to populate the dataset with a single underline, and the inflammatory response keywords with a double underline. The first rule in the figure indicates that if two inflammatory response criteria are highly important for the network, the term infection is highly important, and phrases negating the presence of infection ", "cite_spans": [], "ref_spans": [ { "start": 156, "end": 163, "text": "Table 2", "ref_id": null }, { "start": 799, "end": 807, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Explaining synthetic data classifiers", "sec_num": "4.2" }, { "text": "Eval type LSTM100,E100 LSTM100,E50 LSTM50,E100 LSTM50,E50 Table 2 : Test set fidelity scores of explanations (in %macro-F1), and number of explanation rules as the measure of explanation complexity for different LSTM classifiers on the synthetic dataset using our approach compared to the baseline approach. LSTMx,Ey refers to LSTM with x hidden nodes and y dimensional word embeddings. sg in parenthesis refers to skipgram-based explanations.", "cite_spans": [], "ref_spans": [ { "start": 58, "end": 65, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Explanation", "sec_num": null }, { "text": "(a) if hyperglycemia = ++ AND to exclude = 0 AND evidence infection . = 0 AND infection = ++ AND no infection .= 0 AND no infection = 0 AND negative infection = 0 AND or of infection = 0 AND fungal infection other = 0 AND of infection in the = 0 AND altered = ++ =\u21d2 septic (17466/17466) (b) if tachypnea = 0 AND meningitis = 0 AND urinary tract = 0 AND endocarditis = 0 AND hyperglycemia = 0 =\u21d2 non-septic (16015/16015) (c) if no = ++ AND urinary = 0 AND bacterial = 0 AND mental = \u2212 =\u21d2 non-septic (1277/1345) Figure 3 : Example explanation rules for the best LSTM classifier on the synthetic dataset. Infection keywords from set I are marked with a single underline, and the corresponding inflammatory response keywords from set Inf l are marked with double underline. ++ refers to high positive importance of a term, 0 represents absence of a term, and \u2212 means that the term gets a low negative importance, i.e., presence of the term reduces the output probability. The numbers (a/b) mean that b training instances are explained by the rule, of which a are correct. The first two rules are obtained with skipgrams, and the third one is obtained on using only unigrams for explanations. are absent, then the class is recognized as septic. This is similar to the rule we have used to label the synthetic dataset, which requires at least one infection term and at least two inflammatory response criteria to not be negated in the document for being assigned a septic class. 
In the next rule, applied after all the cases matched by the previous rule have been excluded from the dataset, the document is classified as non-septic if several keyword terms are absent. It is useful to remember that urinary tract is usually followed by the word infection in the dataset, and several instances mentioning infection have already been explained by the previous rule and hence have been ignored by this rule. This explanation rule is also in accordance with the synthetic dataset, where 20k documents do not contain any keyword term and are labeled as non-septic. The third rule is an example rule for the same model when explanations are based on unigrams only, as opposed to skipgrams. In this case, we lose the context of the negation marker no. When using skipgrams, this context of negation is available, which makes the negation scope clearer. Further, terms like evidence, fungal and urinary tract captured by skipgrams provide additional context for understanding the rules. This illustrates that even though the fidelity scores of the explanations are similar, skipgram-based explanations are more interpretable than purely unigram-based explanations. Hence, we use skipgrams for further analysis.", "cite_spans": [], "ref_spans": [ { "start": 510, "end": 518, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Explanation", "sec_num": null }, { "text": "We rerun our explainability pipeline on both clinical models for sepsis classification, with and without discharge notes (Section 3.2). For the first classifier, which uses discharge notes, we again obtain very high fidelity scores of explanations (Table 3).", "cite_spans": [], "ref_spans": [ { "start": 246, "end": 255, "text": "(Table 3)", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Explaining clinical models", "sec_num": "4.3" }, { "text": "if sepsis major surgical = ++ =\u21d2 septic (209/209) if complaint : sepsis = 0 AND chief hypotension major = ++ =\u21d2 septic (169/169) Figure 4 : First two explanation rules for the clinical dataset that uses discharge notes to classify sepsis. ++ refers to high positive importance, and 0 refers to an absent term in the document. (a/b) in parentheses shows that a of the b examples explained by this rule are correct.", "cite_spans": [], "ref_spans": [ { "start": 129, "end": 137, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Explaining clinical models", "sec_num": "4.3" }, { "text": "The baseline explanations have significantly lower fidelity scores while also being extremely complex. On inspecting the corresponding explanation rules given in Figure 4, we find that they refer to the direct mentions of sepsis in the discharge notes. In the first rule, if sepsis major surgical is mentioned, the class is directly septic. The second rule first rules out a mention of a complaint of sepsis and then checks for additional conditions. This confirms that not only does the classifier pick up on these direct mentions, but the explanations also recover this information. This illustrates the utility of UNRAVEL in understanding our models, which is the first step towards improving them. For example, if our model is learning direct mentions of sepsis as a discriminating feature, we could remove these direct mentions from the dataset before training new models to ensure that they generalize. Next, for the more difficult case where we use only the final non-discharge note about patients to classify whether they have sepsis, the fidelity score is 77.33%. 
Although this score is good as an absolute number, it is much lower than in the other two cases. Explanations for this model are also much more complex. This highlights that more complex classifiers and explanations come with lower explanation fidelity. While manually inspecting these explanations, we find that the absence of terms such as diagnosis : sepsis, indication endocarditis . valve, indication bacteremia, admitting diagnosis fever and pyelonephritis is used to rule out sepsis. These are similar to the explanations for the other two datasets, albeit enriched with information about additional infections and body conditions. This confirms that the synthetic dataset closely models a real clinical use case, and suggests that these explanation rules could result in useful hypothesis generation.", "cite_spans": [], "ref_spans": [ { "start": 162, "end": 170, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Explaining clinical models", "sec_num": "4.3" }, { "text": "Results of the SST2 explanations are given in Table 3. Our pipeline provides \u223c10% more accurate explanations compared to Anchors. Moreover, on", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Explaining sentiment classifier", "sec_num": "4.4" }, { "text": "We have successfully developed a pipeline to obtain transferable, accurate, gradient-informed explanation rules from RNNs. We have constructed a synthetic dataset to qualitatively and quantitatively evaluate the results, and we obtain informative explanations with high fidelity scores. We obtain similar results on clinical datasets and sentiment analysis. Our approach is transferable to all similar neural models. In the future, it would be interesting to extend the capabilities of this approach to obtain more accurate, less complex, and scalable explanations for classifiers with more complex patterns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5" }, { "text": "https://bit.ly/3575e3d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Sentences mentioning both the keywords are sampled. 4 Sentences mentioning either of the keywords are sampled.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We do not experiment with other types of classifiers because the focus of this work is to find and evaluate explanation rules for sequential models that use word embeddings as input, as opposed to comparing different classifiers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Their implementation is not openly available for direct comparison.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was carried out within the Accumulate strategic basic research project, funded by the government agency Flanders Innovation & Entrepreneurship (VLAIO) [grant number 150056]. It also received funding from the Flemish Government (AI Research Program). We would like to thank all the anonymous reviewers of this paper, whose useful comments have ensured that it is in a better state than the original draft.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "if ? = 0 AND bad . = 0 AND too = ++ AND one = 0 =\u21d2 negative (159/159) if ? = ++ =\u21d2 negative (81/82) if bad . 
= 0 AND worst = 0 AND fails = 0 AND feels = ++ =\u21d2 negative (54/54) if bad . = 0 AND worst = 0 AND fails = 0 AND is bad = 0 AND flat = 0 AND mess = 0 AND stupid = 0 AND suffers = 0 AND pointless = 0 AND dull = ++ =\u21d2 negative (38/38) if bad . = ++ =\u21d2 negative (36/36) inspecting the explanation rules for our method and Anchors respectively presented in Figures 5 and 6 , we find that Anchors rules consist only of single words, as opposed to UNRAVEL, which finds conjunctions of different phrases. Furthermore, explanation rules with UNRAVEL obtain 71% classification accuracy on the original task. This performance drop compared to LSTM is \u223c7% lower than gradient decomposition-based performance drop reported by Murdoch and Szlam (2017), although the numbers aren't strictly comparable because we explain different classifiers 6 .", "cite_spans": [ { "start": 60, "end": 69, "text": "(159/159)", "ref_id": null }, { "start": 92, "end": 99, "text": "(81/82)", "ref_id": null }, { "start": 333, "end": 340, "text": "(38/38)", "ref_id": null } ], "ref_spans": [ { "start": 461, "end": 477, "text": "Figures 5 and 6", "ref_id": null } ], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Sanity Checks for Saliency Maps", "authors": [ { "first": "Julius", "middle": [], "last": "Adebayo", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Gilmer", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Muelly", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Moritz", "middle": [], "last": "Hardt", "suffix": "" }, { "first": "Been", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 32Nd International Conference on Neural Information Processing Systems, NIPS'18", "volume": "", "issue": "", "pages": "9525--9536", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. 2018. Sanity Checks for Saliency Maps. In Proceedings of the 32Nd International Conference on Neural Infor- mation Processing Systems, NIPS'18, pages 9525- 9536, USA. Curran Associates Inc.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Analyzing and Interpreting Neural Networks for NLP: A Report on the First BlackboxNLP Workshop", "authors": [ { "first": "Afra", "middle": [], "last": "Alishahi", "suffix": "" }, { "first": "Grzegorz", "middle": [], "last": "Chrupala", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Afra Alishahi, Grzegorz Chrupala, and Tal Linzen. 2019. Analyzing and Interpreting Neural Networks for NLP: A Report on the First BlackboxNLP Work- shop. 
CoRR, abs/1904.04063.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Towards better understanding of gradient-based attribution methods for Deep Neural Networks", "authors": [ { "first": "Marco", "middle": [], "last": "Ancona", "suffix": "" }, { "first": "Enea", "middle": [], "last": "Ceolini", "suffix": "" }, { "first": "Cengiz", "middle": [], "last": "\u00d6ztireli", "suffix": "" }, { "first": "Markus", "middle": [], "last": "Gross", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Ancona, Enea Ceolini, Cengiz \u00d6ztireli, and Markus Gross. 2018. Towards better understanding of gradient-based attribution methods for Deep Neu- ral Networks. In International Conference on Learn- ing Representations.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "What is relevant in a text document?\": An interpretable machine learning approach", "authors": [ { "first": "Leila", "middle": [], "last": "Arras", "suffix": "" }, { "first": "Franziska", "middle": [], "last": "Horn", "suffix": "" }, { "first": "Gr\u00e9goire", "middle": [], "last": "Montavon", "suffix": "" }, { "first": "Klaus-Robert", "middle": [], "last": "M\u00fcller", "suffix": "" }, { "first": "Wojciech", "middle": [], "last": "Samek", "suffix": "" } ], "year": 2017, "venue": "PloS one", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leila Arras, Franziska Horn, Gr\u00e9goire Montavon, Klaus-Robert M\u00fcller, and Wojciech Samek. 2017. \"What is relevant in a text document?\": An inter- pretable machine learning approach. In PloS one.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Evaluating Recurrent Neural Network Explanations", "authors": [ { "first": "Leila", "middle": [], "last": "Arras", "suffix": "" }, { "first": "Ahmed", "middle": [], "last": "Osman", "suffix": "" }, { "first": "Klaus-Robert", "middle": [], "last": "M\u00fcller", "suffix": "" }, { "first": "Wojciech", "middle": [], "last": "Samek", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "113--126", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leila Arras, Ahmed Osman, Klaus-Robert M\u00fcller, and Wojciech Samek. 2019. Evaluating Recurrent Neu- ral Network Explanations. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 113- 126, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Ask the GRU: Multi-task Learning for Deep Text Recommendations", "authors": [ { "first": "Trapit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "David", "middle": [], "last": "Belanger", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 10th ACM Conference on Recommender Systems, RecSys '16", "volume": "", "issue": "", "pages": "107--114", "other_ids": { "DOI": [ "10.1145/2959100.2959180" ] }, "num": null, "urls": [], "raw_text": "Trapit Bansal, David Belanger, and Andrew McCallum. 2016. Ask the GRU: Multi-task Learning for Deep Text Recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems, RecSys '16, pages 107-114, New York, NY, USA. 
ACM.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Analysis methods in Neural Language Processing: A Survey. Transactions of the Association for Computational Linguistics", "authors": [ { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" } ], "year": 2019, "venue": "", "volume": "7", "issue": "", "pages": "49--72", "other_ids": { "DOI": [ "10.1162/tacl_a_00254" ] }, "num": null, "urls": [], "raw_text": "Yonatan Belinkov and James Glass. 2019. Analysis methods in Neural Language Processing: A Survey. Transactions of the Association for Computational Linguistics, 7:49-72.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Correlating Neural and Symbolic Representations of Language", "authors": [ { "first": "Grzegorz", "middle": [], "last": "Chrupala", "suffix": "" }, { "first": "Afra", "middle": [], "last": "Alishahi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019", "volume": "1", "issue": "", "pages": "2952--2962", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grzegorz Chrupala and Afra Alishahi. 2019. Correlat- ing Neural and Symbolic Representations of Lan- guage. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Vol- ume 1: Long Papers, pages 2952-2962.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Extraction of Salient sentences from Labelled Documents", "authors": [ { "first": "Misha", "middle": [], "last": "Denil", "suffix": "" }, { "first": "Alban", "middle": [], "last": "Demiraj", "suffix": "" }, { "first": "Nando", "middle": [], "last": "De Freitas", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Misha Denil, Alban Demiraj, and Nando de Freitas. 2014. Extraction of Salient sentences from Labelled Documents. Technical report, University of Oxford.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "What's Inside the Black-box?: A Genetic Programming Method for Interpreting Complex Machine Learning Models", "authors": [ { "first": "P", "middle": [], "last": "Benjamin", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Evans", "suffix": "" }, { "first": "Mengjie", "middle": [], "last": "Xue", "suffix": "" }, { "first": "", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Genetic and Evolutionary Computation Conference, GECCO '19", "volume": "", "issue": "", "pages": "1012--1020", "other_ids": { "DOI": [ "10.1145/3321707.3321726" ] }, "num": null, "urls": [], "raw_text": "Benjamin P. Evans, Bing Xue, and Mengjie Zhang. 2019. What's Inside the Black-box?: A Genetic Programming Method for Interpreting Complex Ma- chine Learning Models. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO '19, pages 1012-1020, New York, NY, USA. 
ACM.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Generating accurate rule sets without global optimization", "authors": [ { "first": "Eibe", "middle": [], "last": "Frank", "suffix": "" }, { "first": "H", "middle": [], "last": "Ian", "suffix": "" }, { "first": "", "middle": [], "last": "Witten", "suffix": "" } ], "year": 1998, "venue": "Fifteenth International Conference on Machine Learning", "volume": "", "issue": "", "pages": "144--151", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eibe Frank and Ian H. Witten. 1998. Generating ac- curate rule sets without global optimization. In Fif- teenth International Conference on Machine Learn- ing, pages 144-151. Morgan Kaufmann.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Learning to detect sepsis with a multitask gaussian process RNN classifier", "authors": [ { "first": "Joseph", "middle": [], "last": "Futoma", "suffix": "" }, { "first": "Sanjay", "middle": [], "last": "Hariharan", "suffix": "" }, { "first": "Katherine", "middle": [ "A" ], "last": "Heller", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "1174--1182", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph Futoma, Sanjay Hariharan, and Katherine A. Heller. 2017. Learning to detect sepsis with a multi- task gaussian process RNN classifier. In Proceed- ings of the 34th International Conference on Ma- chine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 1174-1182.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Interpretation of Prediction Models Using the Input Gradient", "authors": [ { "first": "Yotam", "middle": [], "last": "Hechtlinger", "suffix": "" } ], "year": 2016, "venue": "Computing Research Repository", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1611.07634" ] }, "num": null, "urls": [], "raw_text": "Yotam Hechtlinger. 2016. Interpretation of Prediction Models Using the Input Gradient. Computing Re- search Repository, arXiv:1611.07634.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Understanding Convolutional Neural Networks for Text Classification", "authors": [ { "first": "Alon", "middle": [], "last": "Jacovi", "suffix": "" }, { "first": "Oren", "middle": [ "Sar" ], "last": "Shalom", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "56--65", "other_ids": { "DOI": [ "10.18653/v1/W18-5408" ] }, "num": null, "urls": [], "raw_text": "Alon Jacovi, Oren Sar Shalom, and Yoav Goldberg. 2018. Understanding Convolutional Neural Net- works for Text Classification. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 56-65, Brussels, Belgium. 
Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "MIMIC-III, a freely accessible critical care database", "authors": [ { "first": "Alistair", "middle": [ "E", "W" ], "last": "Johnson", "suffix": "" }, { "first": "Tom", "middle": [ "J" ], "last": "Pollard", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Li-Wei", "middle": [ "H" ], "last": "Lehman", "suffix": "" }, { "first": "Mengling", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Ghassemi", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Moody", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Szolovits", "suffix": "" }, { "first": "Leo", "middle": [ "Anthony" ], "last": "Celi", "suffix": "" }, { "first": "Roger", "middle": [ "G" ], "last": "Mark", "suffix": "" } ], "year": 2016, "venue": "", "volume": "3", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alistair EW Johnson, Tom J Pollard, Lu Shen, Li-wei H Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. MIMIC-III, a freely accessible critical care database. Scientific data, 3.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [ "P" ], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "Computing Research Repository", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. Computing Research Repository, arXiv:1412.6980.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Interpretable & Explorable Approximations of Black Box Models. Workshop on Fairness, Accountability, and Transparency in Machine Learning", "authors": [ { "first": "Himabindu", "middle": [], "last": "Lakkaraju", "suffix": "" }, { "first": "Ece", "middle": [], "last": "Kamar", "suffix": "" }, { "first": "Rich", "middle": [], "last": "Caruana", "suffix": "" }, { "first": "Jure", "middle": [], "last": "Leskovec", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1707.01154" ] }, "num": null, "urls": [], "raw_text": "Himabindu Lakkaraju, Ece Kamar, Rich Caruana, and Jure Leskovec. 2017. Interpretable & Explorable Approximations of Black Box Models. Workshop on Fairness, Accountability, and Transparency in Machine Learning, KDD, arXiv:1707.01154.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Human-grounded Evaluations of explanation methods for text classification", "authors": [ { "first": "Piyawat", "middle": [], "last": "Lertvittayakumjorn", "suffix": "" }, { "first": "Francesca", "middle": [], "last": "Toni", "suffix": "" } ], "year": 2019, "venue": "Computing Research Repository", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.11355" ] }, "num": null, "urls": [], "raw_text": "Piyawat Lertvittayakumjorn and Francesca Toni. 2019. Human-grounded Evaluations of explanation methods for text classification.
Computing Research Repository, arXiv:1908.11355.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Visualizing and understanding neural models in NLP", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xinlei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "681--691", "other_ids": { "DOI": [ "10.18653/v1/N16-1082" ] }, "num": null, "urls": [], "raw_text": "Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016a. Visualizing and understanding neural models in NLP. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 681-691, San Diego, California. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Understanding Neural Networks through Representation erasure", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Will", "middle": [], "last": "Monroe", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2016, "venue": "Computing Research Repository", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1612.08220" ] }, "num": null, "urls": [], "raw_text": "Jiwei Li, Will Monroe, and Dan Jurafsky. 2016b. Understanding Neural Networks through Representation erasure. Computing Research Repository, arXiv:1612.08220.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A unified approach to interpreting model predictions", "authors": [ { "first": "Scott", "middle": [ "M" ], "last": "Lundberg", "suffix": "" }, { "first": "Su-In", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "4768--4777", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems, pages 4768-4777.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Rulematrix: visualizing and understanding classifiers with rules", "authors": [ { "first": "Yao", "middle": [], "last": "Ming", "suffix": "" }, { "first": "Huamin", "middle": [], "last": "Qu", "suffix": "" }, { "first": "Enrico", "middle": [], "last": "Bertini", "suffix": "" } ], "year": 2018, "venue": "IEEE transactions on visualization and computer graphics", "volume": "25", "issue": "1", "pages": "342--352", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yao Ming, Huamin Qu, and Enrico Bertini. 2018. Rulematrix: visualizing and understanding classifiers with rules.
IEEE transactions on visualization and computer graphics, 25(1):342-352.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Methods for interpreting and understanding deep neural networks", "authors": [ { "first": "Gr\u00e9goire", "middle": [], "last": "Montavon", "suffix": "" }, { "first": "Wojciech", "middle": [], "last": "Samek", "suffix": "" }, { "first": "Klaus-Robert", "middle": [], "last": "M\u00fcller", "suffix": "" } ], "year": 2018, "venue": "Digital Signal Processing", "volume": "73", "issue": "", "pages": "1--15", "other_ids": { "DOI": [ "10.1016/j.dsp.2017.10.011" ] }, "num": null, "urls": [], "raw_text": "Gr\u00e9goire Montavon, Wojciech Samek, and Klaus-Robert M\u00fcller. 2018. Methods for interpreting and understanding deep neural networks. Digital Signal Processing, 73:1-15.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Automatic Rule Extraction from Long Short Term Memory Networks", "authors": [ { "first": "W", "middle": [ "James" ], "last": "Murdoch", "suffix": "" }, { "first": "Arthur", "middle": [], "last": "Szlam", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. James Murdoch and Arthur Szlam. 2017. Automatic Rule Extraction from Long Short Term Memory Networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Explaining Black Box Models by Means of Local Rules", "authors": [ { "first": "Eliana", "middle": [], "last": "Pastor", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Baralis", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, SAC '19", "volume": "", "issue": "", "pages": "510--517", "other_ids": { "DOI": [ "10.1145/3297280.3297328" ] }, "num": null, "urls": [], "raw_text": "Eliana Pastor and Elena Baralis. 2019. Explaining Black Box Models by Means of Local Rules. In Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, SAC '19, pages 510-517, New York, NY, USA. ACM.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation.
In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Evaluating neural network explanation methods using hybrid documents and morphosyntactic agreement", "authors": [ { "first": "Nina", "middle": [], "last": "Poerner", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "340--350", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nina Poerner, Hinrich Sch\u00fctze, and Benjamin Roth. 2018. Evaluating neural network explanation methods using hybrid documents and morphosyntactic agreement. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 340-350. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "MAGIX: Model Agnostic Globally Interpretable Explanations. Computing Research Repository", "authors": [ { "first": "Nikaash", "middle": [], "last": "Puri", "suffix": "" }, { "first": "Piyush", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Pratiksha", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Sukriti", "middle": [], "last": "Verma", "suffix": "" }, { "first": "Balaji", "middle": [], "last": "Krishnamurthy", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1706.07160" ] }, "num": null, "urls": [], "raw_text": "Nikaash Puri, Piyush Gupta, Pratiksha Agarwal, Sukriti Verma, and Balaji Krishnamurthy. 2017. MAGIX: Model Agnostic Globally Interpretable Explanations. Computing Research Repository, arXiv:1706.07160.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Anchors: High-Precision Model-Agnostic Explanations", "authors": [ { "first": "Marco", "middle": [ "Tulio" ], "last": "Ribeiro", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Guestrin", "suffix": "" } ], "year": 2018, "venue": "AAAI Conference on Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Anchors: High-Precision Model-Agnostic Explanations. In AAAI Conference on Artificial Intelligence (AAAI).", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Deep inside convolutional networks: Visualising image classification models and saliency maps", "authors": [ { "first": "Karen", "middle": [], "last": "Simonyan", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Vedaldi", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Zisserman", "suffix": "" } ], "year": 2013, "venue": "Computing Research Repository", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1312.6034" ] }, "num": null, "urls": [], "raw_text": "Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2013. Deep inside convolutional networks: Visualising image classification models and saliency maps.
Computing Research Repository, arXiv:1312.6034.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Perelygin", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1631--1642", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "CLAMP - a toolkit for efficiently building customized clinical natural language processing pipelines", "authors": [ { "first": "Ergin", "middle": [], "last": "Soysal", "suffix": "" }, { "first": "Jingqi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Min", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Serguei", "middle": [], "last": "Pakhomov", "suffix": "" }, { "first": "Hongfang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2017, "venue": "Journal of the American Medical Informatics Association", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1093/jamia/ocx132" ] }, "num": null, "urls": [], "raw_text": "Ergin Soysal, Jingqi Wang, Min Jiang, Yonghui Wu, Serguei Pakhomov, Hongfang Liu, and Hua Xu. 2017. CLAMP - a toolkit for efficiently building customized clinical natural language processing pipelines. Journal of the American Medical Informatics Association.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Rule induction for global explanation of trained models", "authors": [ { "first": "Madhumita", "middle": [], "last": "Sushil", "suffix": "" }, { "first": "Simon", "middle": [], "last": "\u0160uster", "suffix": "" }, { "first": "Walter", "middle": [], "last": "Daelemans", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "82--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "Madhumita Sushil, Simon \u0160uster, and Walter Daelemans. 2018. Rule induction for global explanation of trained models. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 82-97. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "The complete UNRAVEL pipeline for gradient-informed rule induction in recurrent neural networks.
The underlined terms in point 4 refer to different important skipgrams in the text." }, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "Heatmap visualization of the word importance distribution for a single validation-set instance in an LSTM classifier with 50 hidden nodes and 100-dimensional word embeddings when the L2, sum, and dot pooling techniques are used. Blue reflects positive importance and red indicates negative importance." }, "TABREF3": { "type_str": "table", "num": null, "text": "", "html": null, "content": "
Explanation fidelity (% macro F1) and complexity for sepsis classification: 1) with discharge notes, 2) without discharge notes, and on the SST2 dataset. The baseline method did not converge (in several weeks) for sepsis classification without discharge notes and for SST2 classification. Anchors did not scale (in memory usage) to document-level sepsis datasets.
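
For concreteness, here is a minimal NumPy sketch of the three gradient-pooling variants named in the heatmap figure caption above (L2, sum, and dot). It is an illustration rather than the authors' released code: the function and variable names are ours, we assume "dot" pooling means the per-word dot product between the gradient and the input embedding (the gradient-times-input reading), and the mean used to aggregate a skipgram's word scores is an assumption.

```python
import numpy as np

def word_importance(grads, embs, pooling="dot"):
    """Collapse per-dimension embedding gradients into one score per word.

    grads: (seq_len, emb_dim) gradient of the class score w.r.t. each
           input word embedding of a single document.
    embs:  (seq_len, emb_dim) the input word embeddings themselves.
    """
    if pooling == "l2":    # magnitude of the gradient vector per word
        return np.linalg.norm(grads, axis=1)
    if pooling == "sum":   # signed sum over embedding dimensions
        return grads.sum(axis=1)
    if pooling == "dot":   # gradient-times-input, summed per word
        return (grads * embs).sum(axis=1)
    raise ValueError(f"unknown pooling: {pooling}")

def skipgram_importance(word_scores, token_positions):
    """Aggregate the word scores of one skipgram (mean is an assumption)."""
    return float(np.mean(word_scores[list(token_positions)]))
```

For example, `skipgram_importance(word_importance(grads, embs), (0, 2))` would score the skipgram formed by the first and third tokens of the document.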
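The fidelity numbers in the table are macro F1 scores; as we read the setup, they are computed by treating the neural model's predictions, rather than the gold labels, as the targets for the induced rule list. A sketch with scikit-learn, using made-up prediction arrays:

```python
from sklearn.metrics import f1_score

# Hypothetical predictions on the same held-out documents.
lstm_preds = [1, 0, 1, 1, 0, 0]   # output of the explained LSTM
rule_preds = [1, 0, 1, 0, 0, 0]   # output of the induced rule list

# Fidelity measures agreement with the model, not with the ground truth.
fidelity = 100 * f1_score(lstm_preds, rule_preds, average="macro")
print(f"Explanation fidelity: {fidelity:.1f}% macro F1")
```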