{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:08:49.706684Z"
},
"title": "diagNNose: A Library for Neural Activation Analysis",
"authors": [
{
"first": "Jaap",
"middle": [],
"last": "Jumelet",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Amsterdam",
"location": {}
},
"email": "j.w.d.jumelet@uva.nl"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we introduce diagNNose, an open source library for analysing the activations of deep neural networks. diagNNose contains a wide array of interpretability techniques that provide fundamental insights into the inner workings of neural networks. We demonstrate the functionality of diagNNose with a case study on subjectverb agreement within language models.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we introduce diagNNose, an open source library for analysing the activations of deep neural networks. diagNNose contains a wide array of interpretability techniques that provide fundamental insights into the inner workings of neural networks. We demonstrate the functionality of diagNNose with a case study on subjectverb agreement within language models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We introduce diagNNose, an open source library for analysing deep neural networks. The diagNNose library allows researchers to gain better insights into the internal representations of such networks, providing a broad set of tools of state-ofthe-art analysis techniques. The library supports a wide range of model types, with a main focus on NLP architectures based on LSTMs (Hochreiter and Schmidhuber, 1997) and Transformers (Vaswani et al., 2017) .",
"cite_spans": [
{
"start": 427,
"end": 449,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Open-source libraries have been quintessential in the progress and democratisation of NLP. Popular packages include HuggingFace's transformers -allowing easy access to pretrained Transformer models; jiant (Pruksachatkun et al., 2020 ) -focusing on multitask and transfer learning within NLP; Captum (Kokhlikyan et al., 2020) -providing a range of feature attribution methods; and LIT (Tenney et al., 2020 ) -a platform for visualising and understanding model behaviour. We contribute to the open-source community by incorporating several interpretability techniques that have not been present in these packages.",
"cite_spans": [
{
"start": 205,
"end": 232,
"text": "(Pruksachatkun et al., 2020",
"ref_id": "BIBREF25"
},
{
"start": 299,
"end": 324,
"text": "(Kokhlikyan et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 384,
"end": 404,
"text": "(Tenney et al., 2020",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent years have seen a considerable interest in improving the understanding of how deep neural networks operate (Linzen et al., 2019) . The high-dimensional nature of these models makes it notoriously challenging to untangle their inner dynamics. This has given rise to a novel subfield within AI that focuses on interpretability, providing us a peak inside the black box. diagNNose aims to unify several of these techniques into one library, allowing interpretability research to be conducted in a more streamlined and accessible manner.",
"cite_spans": [
{
"start": 114,
"end": 135,
"text": "(Linzen et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "diagNNose's main focus lies on techniques that aid in uncovering linguistic knowledge that is encoded within a model's representations. The library provides abstractions that allow recurrent models to be investigated in the same way as Transformer models, in a modular fashion. It contains an extensive activation extraction module that allows for the extraction of (intermediate) model activations on a corpus. The analysis techniques that are currently implemented include:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Targeted syntactic evaluation tasks, such as those of Linzen et al. (2016) and Marvin and Linzen (2018) .",
"cite_spans": [
{
"start": 56,
"end": 76,
"text": "Linzen et al. (2016)",
"ref_id": "BIBREF14"
},
{
"start": 81,
"end": 105,
"text": "Marvin and Linzen (2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Probing with diagnostic classifiers Adi et al., 2016) , and control tasks (Hewitt and Liang, 2019) .",
"cite_spans": [
{
"start": 38,
"end": 55,
"text": "Adi et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 76,
"end": 100,
"text": "(Hewitt and Liang, 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Feature attributions that retrieve a feature's contribution to a model prediction (Lundberg and Lee, 2017; Murdoch et al., 2018) . Our implementation is model-agnostic, which means that any type of model architecture can be explained by it.",
"cite_spans": [
{
"start": 84,
"end": 108,
"text": "(Lundberg and Lee, 2017;",
"ref_id": "BIBREF17"
},
{
"start": 109,
"end": 130,
"text": "Murdoch et al., 2018)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we present both an overview of the library, as well as a case study on subjectverb agreement within language models. We first present a brief overview of interpretability within NLP and a background to the analysis techniques that are part of the library (Section 2). We then provide an overview of diagNNose and expand briefly on its individual modules (Section 3). We conclude with a case study on subject-verb agreement, demonstrating several of diagNNose's features in an experimental setup (Section 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The increasing capacities of language models (and deep learning in general) have led to a rich field of research that aims to gain a better understanding of how these models operate. Approaches in this research area are often interdisciplinary in nature, borrowing concepts from fields such as psycholinguistics, information theory, and game theory. diagNNose provides support for several influential analysis techniques, for which we provide a brief background here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Language models have stood at the basis of many successes within NLP in recent years (Peters et al., 2018; Devlin et al., 2019) . These models are trained on the objective of predicting the probability of an upcoming (or masked) token. In order to succeed in this task, these models need to possess a notion of many different linguistic aspects, such as syntax, semantics, and general domain knowledge. One popular line of research that tries to uncover a model's linguistic capacities does this via so-called targeted syntactic evaluations (Linzen et al., 2016; Gulordava et al., 2018; Marvin and Linzen, 2018; Jumelet and Hupkes, 2018) . This type of analysis compares a model's output on minimally different pairs of grammatical and ungrammatical constructions. If it assigns a higher probability to the grammatical construction, the model is said to possess a notion of the underlying linguistic principles, such as subject-verb agreement or NPI licensing:",
"cite_spans": [
{
"start": 85,
"end": 106,
"text": "(Peters et al., 2018;",
"ref_id": "BIBREF23"
},
{
"start": 107,
"end": 127,
"text": "Devlin et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 541,
"end": 562,
"text": "(Linzen et al., 2016;",
"ref_id": "BIBREF14"
},
{
"start": 563,
"end": 586,
"text": "Gulordava et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 587,
"end": 611,
"text": "Marvin and Linzen, 2018;",
"ref_id": "BIBREF18"
},
{
"start": 612,
"end": 637,
"text": "Jumelet and Hupkes, 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Targeted syntactic evaluations",
"sec_num": "2.1"
},
{
"text": "(1) a. The ladies near John walk.\nb. * The ladies near John walks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Targeted syntactic evaluations",
"sec_num": "2.1"
},
{
"text": "(2) a. Nobody has ever been there.\nb. * Someone has ever been there.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Targeted syntactic evaluations",
"sec_num": "2.1"
},
{
"text": "diagNNose supports a wide range of syntactic tasks, as well as an interface that allows new tasks to be added without effort.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Targeted syntactic evaluations",
"sec_num": "2.1"
},
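{
"text": "To make this comparison concrete for example (1), the following minimal sketch scores the pair with an off-the-shelf unidirectional language model from the transformers library; this is illustrative code rather than the diagNNose task interface, and the choice of the gpt2 model is ours.\nimport torch\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\n\n# Score the grammatical vs. ungrammatical continuation of example (1),\n# given the prefix up to the position of the verb.\ntokenizer = AutoTokenizer.from_pretrained('gpt2')\nmodel = AutoModelForCausalLM.from_pretrained('gpt2').eval()\n\ninput_ids = tokenizer('The ladies near John', return_tensors='pt').input_ids\nwith torch.no_grad():\n    next_token_logits = model(input_ids).logits[0, -1]\n\n# Only compare verb forms that are encoded as a single subword token.\ngrammatical = tokenizer.encode(' walk')\nungrammatical = tokenizer.encode(' walks')\nif len(grammatical) == 1 and len(ungrammatical) == 1:\n    print(next_token_logits[grammatical[0]] > next_token_logits[ungrammatical[0]])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Targeted syntactic evaluations",
"sec_num": "2.1"
},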
{
"text": "A second line of work tries to assess a model's understanding of linguistic properties -such as part-of-speech tags or number information -by directly training diagnostic classifiers on top of its representations Adi et al., 2016; Belinkov et al., 2017) . This type of analysis, also referred to as probing, has led to numerous insights into the inner workings of language models (Liu et al., 2019a; Tenney et al., 2019) . The activations diagnostic classifiers are trained on are not restricted to just the hidden states of a language model at their top layer: this can, for instance, also be done on the individual gate activations to reveal patterns at the cell-level of a model (Giulianelli et al., 2018; Lakretz et al., 2019) . Recently, it has been a topic of discussion to what extent a high accuracy of a diagnostic classifier signifies that that property is actively being encoded by the model. Several solutions to assess this have been proposed, such as training a diagnostic classifier on a baseline of random labels (called a control task (Hewitt and Liang, 2019)), or based on the minimum description length of the classifier, a concept from information theory (Voita and Titov, 2020; Pimentel et al., 2020) . diagNNose currently facilitates the training of diagnostic classifiers, as well as training control tasks alongside them.",
"cite_spans": [
{
"start": 213,
"end": 230,
"text": "Adi et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 231,
"end": 253,
"text": "Belinkov et al., 2017)",
"ref_id": "BIBREF1"
},
{
"start": 380,
"end": 399,
"text": "(Liu et al., 2019a;",
"ref_id": "BIBREF15"
},
{
"start": 400,
"end": 420,
"text": "Tenney et al., 2019)",
"ref_id": "BIBREF30"
},
{
"start": 682,
"end": 708,
"text": "(Giulianelli et al., 2018;",
"ref_id": "BIBREF4"
},
{
"start": 709,
"end": 730,
"text": "Lakretz et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 1175,
"end": 1198,
"text": "(Voita and Titov, 2020;",
"ref_id": "BIBREF33"
},
{
"start": 1199,
"end": 1221,
"text": "Pimentel et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Diagnostic Classifiers",
"sec_num": "2.2"
},
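{
"text": "As a minimal sketch of this probing setup, the following illustrative code fits a linear diagnostic classifier on stand-in (randomly generated) activations and compares it against a control probe trained on labels that are decoupled from the linguistic property, in the spirit of Hewitt and Liang (2019); it does not use the diagNNose probe module itself.\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\n\nrng = np.random.default_rng(0)\nactivations = rng.normal(size=(1000, 650))   # stand-in for extracted hidden states\nlabels = rng.integers(0, 2, size=1000)        # e.g. singular vs. plural number labels\ncontrol = rng.permutation(labels)             # simplified control task: labels decoupled from the property\n\nprobe = LogisticRegression(max_iter=1000).fit(activations[:800], labels[:800])\ncontrol_probe = LogisticRegression(max_iter=1000).fit(activations[:800], control[:800])\n\n# Selectivity: probe accuracy minus control-task accuracy on held-out data.\nselectivity = probe.score(activations[800:], labels[800:]) - control_probe.score(activations[800:], control[800:])\nprint(selectivity)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diagnostic Classifiers",
"sec_num": "2.2"
},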
{
"text": "Although probing allows us to uncover specific properties that are embedded within the model representations, it is unable to explain how a model transforms its input features into a successful prediction. This question can be addressed by computing the input feature contributions to a subsequent output. This is a challenging task, as the high-dimensional, non-linear nature of deep learning models prevents us from expressing these contributions directly on the basis of the model parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Attributions",
"sec_num": "2.3"
},
{
"text": "Feature attributions can be computed in different ways. One common approach to this task is based on a concept that stems from cooperative game theory, called the Shapley value (Shapley, 1953) . A Shapley value expresses the contribution of a player (in our case an input feature) to the outcome of game (in our the case a model prediction). Computing Shapley values is computationally expensive, and several approximation algorithms have therefore been proposed, such as SHAP (Lundberg and Lee, 2017), and Integrated Gradients (Sundararajan et al., 2017) . diagNNose currently facilitates the computation of feature attributions using a technique called Contextual Decomposition (Murdoch et al., 2018) , and its generalisation as proposed by Jumelet et al. (2019) .",
"cite_spans": [
{
"start": 177,
"end": 192,
"text": "(Shapley, 1953)",
"ref_id": "BIBREF28"
},
{
"start": 528,
"end": 555,
"text": "(Sundararajan et al., 2017)",
"ref_id": "BIBREF29"
},
{
"start": 680,
"end": 702,
"text": "(Murdoch et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 743,
"end": 764,
"text": "Jumelet et al. (2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Attributions",
"sec_num": "2.3"
},
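{
"text": "To make the definition concrete, the following toy sketch computes exact Shapley values for a three-feature model by enumerating all feature coalitions; the toy function and baseline are ours, and practical attribution methods approximate this exponential sum.\nfrom itertools import combinations\nfrom math import factorial\n\ndef f(x, present, baseline=0.0):\n    # Toy model: absent features are replaced by a baseline value,\n    # plus an interaction term between features 0 and 1.\n    out = sum(x[i] if i in present else baseline for i in range(len(x)))\n    return out + (0.5 if 0 in present and 1 in present else 0.0)\n\ndef shapley_values(x):\n    n = len(x)\n    phi = [0.0] * n\n    for i in range(n):\n        others = [j for j in range(n) if j != i]\n        for size in range(n):\n            for coalition in combinations(others, size):\n                weight = factorial(size) * factorial(n - size - 1) / factorial(n)\n                gain = f(x, set(coalition) | {i}) - f(x, set(coalition))\n                phi[i] += weight * gain\n    return phi\n\nvalues = shapley_values([1.0, 2.0, -0.5])\nprint(values, sum(values))   # efficiency: the values sum to f(all features) - f(no features)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Attributions",
"sec_num": "2.3"
},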
{
"text": "3 Library Overview",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Attributions",
"sec_num": "2.3"
},
{
"text": "The library is structured into several modules that can be used as building blocks for an experimental pipeline. We provide an overview of a possible experimental pipeline in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 175,
"end": 183,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Modules",
"sec_num": "3.1"
},
{
"text": "The following core modules stand at the basis of the different pipelines that can be build on top of diagNNose.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Core modules",
"sec_num": "3.1.1"
},
{
"text": "models We provide an abstraction over language models, enabling recurrent and Transformer models to derive from the same interface. Importing pre-trained Transformer models is done via the transformers library. For recurrent models we provide a wrapper that enables access to intermediate activations, including gate activations. We also provide functionality that allows to set the initial hidden states of recurrent LMs, based on a sentence or corpus. 1 corpus Corpora are imported as a Dataset from the torchtext package. A Corpus can be transformed into an iterator for processing. Tokenization is done using the transformers tokenizers, allowing tokenization to be done in both a traditional token-per-token fashion, or based on subword units, such as byte pair encodings (Sennrich et al., 2016) .",
"cite_spans": [
{
"start": 777,
"end": 800,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Core modules",
"sec_num": "3.1.1"
},
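{
"text": "For illustration, subword tokenization with a transformers tokenizer looks as follows; this is plain transformers code rather than the diagNNose Corpus interface, and the model name and example words are ours.\nfrom transformers import AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained('roberta-base')\n# Byte pair encoding may split a single word into several subword units,\n# e.g. this tokenizer splits 'confuses' into 'conf' + 'uses'.\nprint(tokenizer.tokenize('The ladies near John walk.'))\nprint(tokenizer.tokenize('confuses'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Core modules",
"sec_num": "3.1.1"
},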
{
"text": "extract Central to most of the analysis modules is the extraction of activations. We provide an Extractor class that can extract the activations of a model given a corpus. Thanks to our model wrappers activation extraction is not restricted to just the top layer of a model; intermediate (gate) activations can be extracted as well. To facilitate the extraction of larger corpora with limited computational resources, activations can be dumped dynamically to disk.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Core modules",
"sec_num": "3.1.1"
},
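{
"text": "A minimal sketch of the underlying idea, using plain PyTorch forward hooks on a transformers model rather than the Extractor class itself (the model name and hook structure are illustrative):\nimport torch\nfrom transformers import AutoModel, AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained('distilroberta-base')\nmodel = AutoModel.from_pretrained('distilroberta-base').eval()\n\n# Register a forward hook on every encoder layer to capture its hidden states.\nactivations = {}\ndef save_activation(name):\n    def hook(module, inputs, output):\n        activations[name] = output[0].detach()\n    return hook\n\nfor i, layer in enumerate(model.encoder.layer):\n    layer.register_forward_hook(save_activation(f'layer_{i}'))\n\nwith torch.no_grad():\n    model(**tokenizer('The ladies near John walk.', return_tensors='pt'))\nprint({name: act.shape for name, act in activations.items()})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Core modules",
"sec_num": "3.1.1"
},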
{
"text": "Figure 1: Pipeline stages for conducting syntactic evaluation and probing experiments. Note the modular nature of the pipeline: activations only need to be extracted once, after which the setup of the analysis experiments can be fine-tuned effortlessly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Core modules",
"sec_num": "3.1.1"
},
{
"text": "activations Extracted activations can easily be retrieved using an ActivationReader, providing access to the activations that correspond to a specific subset of corpus sentences. We also provide functionality for extracting only a specific subset of activations, based on sentence and token information. This way it is possible, for instance, to only extract the activations at the position of tokens of particular interest.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Core modules",
"sec_num": "3.1.1"
},
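{
"text": "Selecting the activation at a single token position per sentence then amounts to the following kind of indexing; this is a plain torch sketch with random tensors, not the ActivationReader API.\nimport torch\n\nhidden = torch.randn(32, 20, 768)               # [sentences, tokens, hidden size]\nverb_positions = torch.randint(0, 20, (32,))    # e.g. the position of the verb in each sentence\nverb_states = hidden[torch.arange(32), verb_positions]\nprint(verb_states.shape)                        # torch.Size([32, 768])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Core modules",
"sec_num": "3.1.1"
},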
{
"text": "config The pipeline of diagNNose is configuration-driven.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Core modules",
"sec_num": "3.1.1"
},
{
"text": "Configuration is defined in JSON format, but individual attributes can also be set from the command line directly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Core modules",
"sec_num": "3.1.1"
},
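{
"text": "A minimal sketch of such a configuration-driven setup, with a command-line flag overriding a JSON default; the keys shown are illustrative and do not reproduce the exact diagNNose schema.\nimport argparse\nimport json\n\n# Defaults that would normally live in a JSON configuration file.\nconfig = json.loads('{\"model_name\": \"distilroberta-base\", \"corpus_path\": \"corpus.tsv\"}')\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--model_name')\nargs = parser.parse_args()\nif args.model_name is not None:\n    config['model_name'] = args.model_name   # a command-line value overrides the file default\nprint(config)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Core modules",
"sec_num": "3.1.1"
},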
{
"text": "We currently support three main types of experimental modules. We provide a graphical overview of these modules in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 123,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Analysis modules",
"sec_num": "3.1.2"
},
{
"text": "syntax The library provides functionality for a large suite of targeted syntactic evaluation tasks. Currently we provide support for the following tasks:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis modules",
"sec_num": "3.1.2"
},
{
"text": "\u2022 The subject-verb agreement corpus of Linzen et al. (2016) , for which we also provide more fine-grained attractor conditions;",
"cite_spans": [
{
"start": 39,
"end": 59,
"text": "Linzen et al. (2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis modules",
"sec_num": "3.1.2"
},
{
"text": "\u2022 The wide range of linguistic expressions of Marvin and Linzen 2018 Furthermore, the current implementation permits similar types of tasks to be easily added, and we plan on incorporating a larger set of tasks in the near future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis modules",
"sec_num": "3.1.2"
},
{
"text": "probe We provide easy tooling for training diagnostic classifiers Adi et al., 2016) on top of extracted activations, to probe for linguistic information that might be embedded within them. Our extraction module facilitates ",
"cite_spans": [
{
"start": 66,
"end": 83,
"text": "Adi et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis modules",
"sec_num": "3.1.2"
},
{
"text": "The athletes above Barbara ... training diagnostic classifiers on top of intermediate activations as well, including gate activations. In recent years it has been pointed out that a high probing accuracy does not necessarily imply that linguistic information is actively being encoded by a model. To address this we have incorporated functionality for Control Tasks (Hewitt and Liang, 2019) , providing more qualitative insights.",
"cite_spans": [
{
"start": 366,
"end": 390,
"text": "(Hewitt and Liang, 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature attributions",
"sec_num": null
},
{
"text": "attribute We provide functionality for modelagnostic feature attributions, that allow the output of a model to be decomposed into a sum of contributions. This is achieved by implementing a wrapper over the operations of PyTorch 2 , allowing intermediate feature contributions to be propagated during a forward pass in the model. Our implementation provides a basis for many Shapley-based attribution methods, as it allows different approximation methods to be tested easily. We currently facilitate the approximation procedure of (Generalised) Contextual Decomposition (Murdoch et al., 2018; Jumelet et al., 2019 ), Shapley sampling values (Castro et al., 2009) , and the exact computation of propagated Shapley values. Our implementation is the first model-agnostic implementation of Contextual Decomposition: previous implementations were dependent on a fixed model structure.",
"cite_spans": [
{
"start": 569,
"end": 591,
"text": "(Murdoch et al., 2018;",
"ref_id": "BIBREF19"
},
{
"start": 592,
"end": 612,
"text": "Jumelet et al., 2019",
"ref_id": "BIBREF10"
},
{
"start": 640,
"end": 661,
"text": "(Castro et al., 2009)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probing Syntactic evaluation",
"sec_num": null
},
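{
"text": "As an illustration of the sampling-based approximation mentioned above, the following sketch estimates Shapley values by averaging marginal contributions over random feature permutations, in the spirit of Castro et al. (2009); it is a generic toy sketch, not the library's __torch_function__-based implementation.\nimport random\n\ndef f(x, present, baseline=0.0):\n    # Toy model: absent features are replaced by a baseline value.\n    return sum(x[i] if i in present else baseline for i in range(len(x)))\n\ndef shapley_sampling(x, n_samples=2000, seed=0):\n    rng = random.Random(seed)\n    n = len(x)\n    phi = [0.0] * n\n    for _ in range(n_samples):\n        order = rng.sample(range(n), n)      # a random permutation of the features\n        present = set()\n        prev = f(x, present)\n        for i in order:\n            present.add(i)\n            curr = f(x, present)\n            phi[i] += (curr - prev) / n_samples\n            prev = curr\n    return phi\n\nprint(shapley_sampling([1.0, 2.0, -0.5]))    # approaches the exact Shapley values",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis modules",
"sec_num": "3.1.2"
},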
{
"text": "diagNNose is released on pip and can be installed using pip install diagnnose, or directly cloned from the GitHub repository: https:// github.com/i-machine-think/diagnnose. The 2 The wrapper is defined based on the __torch_function__ functionality that has been introduced in PyTorch 1.5. library supports Python 3.6 or later, and its core dependencies are PyTorch (Paszke et al., 2019) (v1.5+), torchtext 3 , and HuggingFace's transformers . diagNNose is released under the MIT License (Open Source Initiative, 2020). diagNNose runs both on CPUs and GPUs, and has especially been optimised for smaller consumer setups, due to limited computational resources during development.",
"cite_spans": [
{
"start": 177,
"end": 178,
"text": "2",
"ref_id": null
},
{
"start": 365,
"end": 386,
"text": "(Paszke et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Requirements",
"sec_num": "3.2"
},
{
"text": "The diagNNose code base is fully typed using Python type hints. The code is formatted using Black. 4 All methods and classes are documented, and an overview of this documentation can be found on https://diagnnose.readthedocs.io.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Requirements",
"sec_num": "3.2"
},
{
"text": "To demonstrate the functionality of diagNNose we will consider the subject-verb agreement corpora of Lakretz et al. (2019) on a set of language models. For our experiments we consider the following models: BERT (Devlin et al., 2019) , RoBERTa (Liu et al., 2019b) , DistilRoBERTa , and the LSTM language model of Gulordava et al. (2018) .",
"cite_spans": [
{
"start": 101,
"end": 122,
"text": "Lakretz et al. (2019)",
"ref_id": "BIBREF12"
},
{
"start": 211,
"end": 232,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 243,
"end": 262,
"text": "(Liu et al., 2019b)",
"ref_id": "BIBREF16"
},
{
"start": 312,
"end": 335,
"text": "Gulordava et al. (2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study: Subject-Verb Agreement",
"sec_num": "4"
},
{
"text": "The corpora of Lakretz et al. (2019) are formed by seven tasks of template-based syntactic constructions. These constructions contain an \"agreement attractor\" in between the subject and the verb, which might trick a language model into predicting the incorrect number of the verb. A model thus needs to possess a strong notion of the structure of a sentence: nouns within a prepositional phrase, for instance, should have no impact on the number of the main verb in a sentence.",
"cite_spans": [
{
"start": 15,
"end": 36,
"text": "Lakretz et al. (2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpora",
"sec_num": "4.1"
},
{
"text": "The seven tasks are defined by the following templates: To test these corpora on a recurrent model, we first compute the model's hidden state at the position of the verb by feeding it the sub-sentence up till that position. Based on this hidden state we compute the output probabilities of the verb of the correct number (v ), and the incorrect number (v ), and compare these:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpora",
"sec_num": "4.1"
},
{
"text": "P (v | h t ) > P (v | h t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SIMPLE",
"sec_num": null
},
{
"text": "For bi-directional masked language models, such as BERT, we can not compute a model's intermediate hidden state by passing it a sub-sentence, because these models also incorporate the input of future tokens. To solve this, we replace the verb in each sentence with a <mask> token, and assess the model's probabilities at the position of this token. Modern language models often make use of BPE tokenization that might split a word into multiple sub-words. In our experiments we therefore only compare verb forms for which both the plural and singular form are split into a single token. 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SIMPLE",
"sec_num": null
},
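{
"text": "A minimal sketch of this masked evaluation, using the transformers library directly rather than the diagNNose syntax module (the model name and verb pair are illustrative):\nimport torch\nfrom transformers import AutoModelForMaskedLM, AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained('distilroberta-base')\nmodel = AutoModelForMaskedLM.from_pretrained('distilroberta-base').eval()\n\n# Replace the verb with the mask token and read off the distribution at that position.\nsentence = f'The ladies near John {tokenizer.mask_token} every day.'\ninputs = tokenizer(sentence, return_tensors='pt')\nmask_position = (inputs['input_ids'][0] == tokenizer.mask_token_id).nonzero().item()\nwith torch.no_grad():\n    logits = model(**inputs).logits[0, mask_position]\n\n# Only compare verb forms whose singular and plural are both a single subword token.\nplural = tokenizer.encode(' walk', add_special_tokens=False)\nsingular = tokenizer.encode(' walks', add_special_tokens=False)\nif len(plural) == 1 and len(singular) == 1:\n    print(logits[plural[0]] > logits[singular[0]])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SIMPLE",
"sec_num": null
},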
{
"text": "We run the targeted syntactic evaluation suite on all the 7 templates. An example configuration and script of this experiment in provided in Figure 3a .",
"cite_spans": [],
"ref_spans": [
{
"start": 141,
"end": 150,
"text": "Figure 3a",
"ref_id": null
}
],
"eq_spans": [],
"section": "Targeted syntactic evaluations",
"sec_num": "4.2"
},
{
"text": "To run the experiment on a different model, the only configuration that needs to be changed is the model_name. The results of the experiment are shown in Table 1 . It can be seen that the Transformer language models generally achieve higher scores than the LSTM model. Interestingly, the NAMEPP task poses a challenge for all models, and both RoBERTa and DistilRoBERTa score lower on this task than the LSTM. A second point of interest is the difference in performance between RoBERTa and DistilRoBERTa on the NAMEPP and NOUNPP tasks. Even though DistilRoBERTa has been trained to emulate the behaviour of RoBERTa, its performance on a downstream task like this differs significantly. These results can provide a starting point for a more fine-grained analysis, such as creating the feature attributions of a model on a specific template.",
"cite_spans": [],
"ref_spans": [
{
"start": 154,
"end": 161,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Targeted syntactic evaluations",
"sec_num": "4.2"
},
{
"text": "To gain a better insight into why the language models struggle so strongly with the NAMEPP corpus, we run the feature attribution module on these constructions. An example configuration of this experimental setup is provided in Figure 3b . The results for the experiment are shown in Figure 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 228,
"end": 237,
"text": "Figure 3b",
"ref_id": null
},
{
"start": 284,
"end": 292,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature attributions",
"sec_num": "4.3"
},
{
"text": "Figure 4: The feature attributions for DistilRoBERTa on an example sentence from the NAMEPP task of Lakretz et al. (2019). The logits of two output tokens, 'approve' and 'approves', are decomposed into a sum of contributions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature attributions",
"sec_num": "4.3"
},
{
"text": "We show the attributions for DistilRoBERTa on an example sentence from the corpus, which highlights the difference in impact of the intervening attractor on the number of the verb. The results should be interpreted as follows: the score at the top of the attribution denotes the full logit of the model for that class; these are the logits that are transformed into probabilities using a softmax. This logit is decomposed into a sum of contributions, which we denote at the bottom of each token. It can be verified that the contributions sum to the logit. This is an important property of feature attribution methods, called efficiency, that warrants a certain degree of faithfulness of an explanation to the model. A negative value indicates a negative feature contribution to an output class: the impact of that feature led to a decreased preference for the class. Feature attributions also include the influence of model biases: an aggregate of all information that is statically present within the network, such as weight intercepts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature attributions",
"sec_num": "4.3"
},
{
"text": "On the presented example sentence, Distil-RoBERTa makes an incorrect prediction: the logit of the incorrect singular form 'approves' is larger than that of the plural 'approve'. The model's misstep in predicting the correct verb form arrives from the fact that the subject 'athletes' provided not enough contribution to overrule the negative contributions stemming from other input features. A model that has a thorough understanding of subjectverb agreement should assign a larger contribution to the subject when predicting the main verb: the number signal provided by the subject should be propagated strongly enough to overrule other interfering signals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature attributions",
"sec_num": "4.3"
},
{
"text": "The attribute module is still in active development. The exponential nature of computing Shapley values makes creating these explanations a challenging task, and we look forward to incorporate other techniques that aim to alleviate the computing costs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature attributions",
"sec_num": "4.3"
},
{
"text": "diagNNose provides essential tools for conducting interpretability research, providing cutting edge analysis techniques such as diagnostic classifiers and feature attributions. The modular design of the library allows complex hypotheses to be tested rapidly, and provides a solid basis for the development of novel interpretability techniques. The library code is open source and welcomes others to contribute: we are eagerly looking forward to collaborate on adding new features to the library.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "As has been noted byJumelet et al. (2019), LSTM LMs perform better when initialised with the phrase \". <eos>\", instead of zero-valued vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://pytorch.org/text/ 4 https://github.com/psf/black",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The RoBERTa tokenizer, for example, splits \"confuses\" into \"conf\" + \"uses\", and \"confuse\" into \"confuse\". Comparing the model probabilities for these two forms directly is hence not possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The author gratefully acknowledges the feedback received from Dieuwke Hupkes during the development of the library.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Fine-grained analysis of sentence embeddings using auxiliary prediction tasks",
"authors": [
{
"first": "Yossi",
"middle": [],
"last": "Adi",
"suffix": ""
},
{
"first": "Einat",
"middle": [],
"last": "Kermany",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Ofer",
"middle": [],
"last": "Lavi",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1608.04207"
]
},
"num": null,
"urls": [],
"raw_text": "Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2016. Fine-grained anal- ysis of sentence embeddings using auxiliary predic- tion tasks. arXiv preprint arXiv:1608.04207.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "What do neural machine translation models learn about morphology?",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Fahim",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "861--872",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do neural ma- chine translation models learn about morphology? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 861-872.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Polynomial calculation of the shapley value based on sampling",
"authors": [
{
"first": "Javier",
"middle": [],
"last": "Castro",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "G\u00f3mez",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Tejada",
"suffix": ""
}
],
"year": 2009,
"venue": "Tenth International Symposium on Locational Decisions (ISOLDE X)",
"volume": "36",
"issue": "",
"pages": "1726--1730",
"other_ids": {
"DOI": [
"10.1016/j.cor.2008.04.004"
]
},
"num": null,
"urls": [],
"raw_text": "Javier Castro, Daniel G\u00f3mez, and Juan Tejada. 2009. Polynomial calculation of the shapley value based on sampling. Computers Operations Research, 36(5):1726 -1730. Selected papers presented at the Tenth International Symposium on Locational Deci- sions (ISOLDE X).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Under the hood: Using diagnostic classifiers to investigate and improve how language models track agreement information",
"authors": [
{
"first": "Mario",
"middle": [],
"last": "Giulianelli",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Harding",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Mohnert",
"suffix": ""
},
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
},
{
"first": "Willem",
"middle": [],
"last": "Zuidema",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "240--248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mario Giulianelli, Jack Harding, Florian Mohnert, Dieuwke Hupkes, and Willem Zuidema. 2018. Un- der the hood: Using diagnostic classifiers to in- vestigate and improve how language models track agreement information. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and In- terpreting Neural Networks for NLP, pages 240- 248.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Colorless green recurrent networks dream hierarchically",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Gulordava",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "\u00c9douard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1195--1205",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Gulordava, Piotr Bojanowski, \u00c9douard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1195-1205.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Designing and interpreting probes with control tasks",
"authors": [
{
"first": "John",
"middle": [],
"last": "Hewitt",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2733--2743",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 2733-2743.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Visualisation and'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure",
"authors": [
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Veldhoen",
"suffix": ""
},
{
"first": "Willem",
"middle": [],
"last": "Zuidema",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Artificial Intelligence Research",
"volume": "61",
"issue": "",
"pages": "907--926",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and'diagnostic classifiers' re- veal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907-926.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Do language models understand anything? on the ability of lstms to understand negative polarity items",
"authors": [
{
"first": "Jaap",
"middle": [],
"last": "Jumelet",
"suffix": ""
},
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "222--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jaap Jumelet and Dieuwke Hupkes. 2018. Do lan- guage models understand anything? on the ability of lstms to understand negative polarity items. In Proceedings of the 2018 EMNLP Workshop Black- boxNLP: Analyzing and Interpreting Neural Net- works for NLP, pages 222-231.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Analysing neural language models: Contextual decomposition reveals default reasoning in number and gender assignment",
"authors": [
{
"first": "Jaap",
"middle": [],
"last": "Jumelet",
"suffix": ""
},
{
"first": "Willem",
"middle": [],
"last": "Zuidema",
"suffix": ""
},
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {
"DOI": [
"10.18653/v1/K19-1001"
]
},
"num": null,
"urls": [],
"raw_text": "Jaap Jumelet, Willem Zuidema, and Dieuwke Hupkes. 2019. Analysing neural language models: Con- textual decomposition reveals default reasoning in number and gender assignment. In Proceedings of the 23rd Conference on Computational Natural Lan- guage Learning (CoNLL), pages 1-11, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Captum: A unified and generic model interpretability library for pytorch",
"authors": [
{
"first": "Narine",
"middle": [],
"last": "Kokhlikyan",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Miglani",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bilal",
"middle": [],
"last": "Alsallakh",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Reynolds",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Melnikov",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Narine Kokhlikyan, Vivek Miglani, Miguel Mar- tin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushk- ina, Carlos Araya, Siqi Yan, and Orion Reblitz- Richardson. 2020. Captum: A unified and generic model interpretability library for pytorch. CoRR, abs/2009.07896.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The emergence of number and syntax units in LSTM language models",
"authors": [
{
"first": "Yair",
"middle": [],
"last": "Lakretz",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Kruszewski",
"suffix": ""
},
{
"first": "Theo",
"middle": [],
"last": "Desbordes",
"suffix": ""
},
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
},
{
"first": "Stanislas",
"middle": [],
"last": "Dehaene",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "11--20",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1002"
]
},
"num": null,
"urls": [],
"raw_text": "Yair Lakretz, German Kruszewski, Theo Desbordes, Dieuwke Hupkes, Stanislas Dehaene, and Marco Ba- roni. 2019. The emergence of number and syn- tax units in LSTM language models. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 11-20, Minneapolis, Minnesota. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Proceedings of the 2019 acl workshop blackboxnlp: Analyzing and interpreting neural networks for nlp",
"authors": [
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupa\u0142a",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tal Linzen, Grzegorz Chrupa\u0142a, Yonatan Belinkov, and Dieuwke Hupkes. 2019. Proceedings of the 2019 acl workshop blackboxnlp: Analyzing and interpret- ing neural networks for nlp. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Assessing the ability of LSTMs to learn syntax-sensitive dependencies",
"authors": [
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "521--535",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00115"
]
},
"num": null,
"urls": [],
"raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521- 535.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Linguistic knowledge and transferability of contextual representations",
"authors": [
{
"first": "F",
"middle": [],
"last": "Nelson",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1073--1094",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nelson F Liu, Matt Gardner, Yonatan Belinkov, Matthew E Peters, and Noah A Smith. 2019a. Lin- guistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 1073-1094.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A unified approach to interpreting model predictions",
"authors": [
{
"first": "M",
"middle": [],
"last": "Scott",
"suffix": ""
},
{
"first": "Su-In",
"middle": [],
"last": "Lundberg",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "4765--4774",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Ad- vances in neural information processing systems, pages 4765-4774.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Targeted syntactic evaluation of language models",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "Marvin",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1192--1202",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1151"
]
},
"num": null,
"urls": [],
"raw_text": "Rebecca Marvin and Tal Linzen. 2018. Targeted syn- tactic evaluation of language models. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Beyond word importance: Contextual decomposition to extract interactions from lstms",
"authors": [
{
"first": "W",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Murdoch",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2018,
"venue": "6th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. James Murdoch, Peter J. Liu, and Bin Yu. 2018. Beyond word importance: Contextual decomposi- tion to extract interactions from lstms. In 6th Inter- national Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 -May 3, 2018, Conference Track Proceedings.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Open Source Initiative. 2020. The mit license",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Open Source Initiative. 2020. The mit license.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Pytorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "",
"middle": [],
"last": "Antiga",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "8026--8037",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. In Ad- vances in neural information processing systems, pages 8026-8037.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227- 2237.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "2020. Information-theoretic probing for linguistic structure",
"authors": [
{
"first": "Tiago",
"middle": [],
"last": "Pimentel",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Valvoda",
"suffix": ""
},
{
"first": "Rowan",
"middle": [],
"last": "Hall Maudslay",
"suffix": ""
},
{
"first": "Ran",
"middle": [],
"last": "Zmigrod",
"suffix": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.03061"
]
},
"num": null,
"urls": [],
"raw_text": "Tiago Pimentel, Josef Valvoda, Rowan Hall Maudslay, Ran Zmigrod, Adina Williams, and Ryan Cotterell. 2020. Information-theoretic probing for linguistic structure. arXiv preprint arXiv:2004.03061.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "jiant: A software toolkit for research on general-purpose text understanding models",
"authors": [
{
"first": "Yada",
"middle": [],
"last": "Pruksachatkun",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Yeres",
"suffix": ""
},
{
"first": "Haokun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Phang",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Phu Mon Htut",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Samuel R",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.02249"
]
},
"num": null,
"urls": [],
"raw_text": "Yada Pruksachatkun, Phil Yeres, Haokun Liu, Jason Phang, Phu Mon Htut, Alex Wang, Ian Tenney, and Samuel R Bowman. 2020. jiant: A software toolkit for research on general-purpose text understanding models. arXiv preprint arXiv:2003.02249.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.01108"
]
},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 1715-1725.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A value for n-person games",
"authors": [
{
"first": "Lloyd",
"middle": [
"S"
],
"last": "Shapley",
"suffix": ""
}
],
"year": 1953,
"venue": "Contributions to the Theory of Games",
"volume": "",
"issue": "",
"pages": "307--317",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lloyd S. Shapley. 1953. A value for n-person games. Contributions to the Theory of Games, (28):307- 317.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Axiomatic attribution for deep networks",
"authors": [
{
"first": "Mukund",
"middle": [],
"last": "Sundararajan",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Taly",
"suffix": ""
},
{
"first": "Qiqi",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "3319--3328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Pro- ceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Aus- tralia, 6-11 August 2017, pages 3319-3328.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Bert rediscovers the classical nlp pipeline",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4593--4601",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. Bert rediscovers the classical nlp pipeline. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4593- 4601.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "2020. The language interpretability tool: Extensible, interactive visualizations and analysis for NLP models",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Wexler",
"suffix": ""
},
{
"first": "Jasmijn",
"middle": [],
"last": "Bastings",
"suffix": ""
},
{
"first": "Tolga",
"middle": [],
"last": "Bolukbasi",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Coenen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Gehrmann",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Mahima",
"middle": [],
"last": "Pushkarna",
"suffix": ""
},
{
"first": "Carey",
"middle": [],
"last": "Radebaugh",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Reif",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Yuan",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Tenney, James Wexler, Jasmijn Bastings, Tolga Bolukbasi, Andy Coenen, Sebastian Gehrmann, Ellen Jiang, Mahima Pushkarna, Carey Radebaugh, Emily Reif, and Ann Yuan. 2020. The language interpretability tool: Extensible, interactive visu- alizations and analysis for NLP models. CoRR, abs/2008.05122.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Informationtheoretic probing with minimum description length",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Voita",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.12298"
]
},
"num": null,
"urls": [],
"raw_text": "Elena Voita and Ivan Titov. 2020. Information- theoretic probing with minimum description length. arXiv preprint arXiv:2003.12298.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Investigating BERT's knowledge of language: Five analysis methods with NPIs",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Warstadt",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Ioana",
"middle": [],
"last": "Grosu",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Hagen",
"middle": [],
"last": "Blix",
"suffix": ""
},
{
"first": "Yining",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Alsop",
"suffix": ""
},
{
"first": "Shikha",
"middle": [],
"last": "Bordia",
"suffix": ""
},
{
"first": "Haokun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Alicia",
"middle": [],
"last": "Parrish",
"suffix": ""
},
{
"first": "Sheng-Fu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Phang",
"suffix": ""
},
{
"first": "Anhad",
"middle": [],
"last": "Mohananey",
"suffix": ""
},
{
"first": "Phu Mon",
"middle": [],
"last": "Htut",
"suffix": ""
},
{
"first": "Paloma",
"middle": [],
"last": "Jeretic",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2877--2887",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1286"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Ha- gen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, Sheng-Fu Wang, Jason Phang, Anhad Mohananey, Phu Mon Htut, Paloma Jeretic, and Samuel R. Bowman. 2019. Investi- gating BERT's knowledge of language: Five anal- ysis methods with NPIs. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2877-2887, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Gender bias in coreference resolution: Evaluation and debiasing methods",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tianlu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "15--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 2 (Short Papers), pages 15-20.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "The subject-verb agreement tasks of Lakretz et al. (2019); \u2022 The NPI corpus of Warstadt et al. (2019); \u2022 The stereotypically gendered anaphora resolution corpus of Jumelet et al. (2019), based on the original WinoBias corpus of Zhao et al. (2018)."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Schematic overview of three different types of experiments that are supported by diagNNose."
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "The athletes approve ADV The uncle probably avoids 2ADV The athlete most probably understands COADV The farmer overtly and deliberately knows NAMEPP The women near John remember NOUNPP The athlete beside the tables approves NOUNPPADV The aunt behind the bikes certainly knows Each task contains 600 to 900 distinct sentences. Sentences are split up into multiple conditions based on the number of the subject, and the number of the intervening noun phrase. The NOUNPP corpus, for instance, is split up into 4 conditions: SS: The athlete beside the table approves SP: The athlete beside the tables approves PS: The athletes beside the table approves PP: The athletes beside the tables approves"
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "3a) Example setup for running the targeted syntactic evaluation tasks of Lakretz et al. (2019) on DistilRoBERTa. (3b) Example setup for creating the feature attributions of DistilRoBERTa on a sentence from the NAMEPP corpus of Lakretz et al. (2019)."
},
"TABREF0": {
"html": null,
"content": "<table><tr><td/><td/><td/><td/><td colspan=\"2\">diagnostic</td><td/><td/></tr><tr><td/><td/><td/><td/><td colspan=\"2\">classifier</td><td/><td/></tr><tr><td>P(approve)</td><td>&gt;</td><td>P(approves)</td><td>\u2022 \u2022 \u2022</td><td>\u2022 \u2022 \u2022</td><td>\u2022 \u2022 \u2022</td><td>\u2022 \u2022 \u2022</td><td/><td>P(approve) = 0.15</td></tr><tr><td/><td>model</td><td/><td/><td/><td>model</td><td/><td/></tr><tr><td/><td/><td/><td>The</td><td>athletes</td><td>above</td><td>Barbara</td><td>The</td><td>athletes</td><td>above</td><td>Barbara</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>0.01</td><td>0.25</td><td>0.01</td><td>-0.12</td></tr></table>",
"type_str": "table",
"text": "POS / NER / dep. parse / etc.",
"num": null
}
}
}
}