Martijn van Beers committed
Commit 3b59ebc
1 Parent(s): 4f67e27

Update texts for the extra method

Files changed (2)
  1. description.md +7 -4
  2. notice.md +3 -2
description.md CHANGED
@@ -1,7 +1,10 @@
 # Attention Rollout -- RoBERTa
 
-In this demo, we use the RoBERTa language model (optimized for masked language modelling and finetuned for sentiment analysis).
-The model predicts for a given sentences whether it expresses a positive, negative or neutral sentiment.
-But how does it arrive at its classification? This is, surprisingly perhaps, very difficult to determine.
+In this demo, we use the RoBERTa language model (optimized for masked language modelling and finetuned
+for sentiment analysis). The model predicts for a given sentences whether it expresses a positive,
+negative or neutral sentiment. But how does it arrive at its classification? This is, surprisingly
+perhaps, very difficult to determine.
 
-Abnar & Zuidema (2020) proposed a method for Transformers called "Attention Rollout", which was further refined by Chefer et al. (2021) into **Gradient-weighted Rollout**. Here we compare it to another popular method called **Integrated Gradients**.
+Abnar & Zuidema (2020) proposed a method for Transformers called **Attention Rollout**, which was further
+refined by Chefer et al. (2021) into **Gradient-weighted Attention Rollout**. Here we compare them to
+another popular method called **Integrated Gradients**.
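The two methods named in the updated description are easy to state concretely. Below is a minimal sketch of plain attention rollout as Abnar & Zuidema (2020) define it, written against the per-layer attention tensors a Hugging Face model returns with `output_attentions=True`; the function name and tensor shapes are illustrative assumptions, not the demo's actual code.

```python
import torch

def attention_rollout(attentions):
    """Roll attention out across layers, following Abnar & Zuidema (2020).

    attentions: tuple of per-layer tensors of shape (batch, heads, seq, seq),
    e.g. model(..., output_attentions=True).attentions. Returns a
    (batch, seq, seq) tensor of token-to-token attribution scores.
    """
    rollout = None
    for layer_attention in attentions:
        # Average over heads, add the identity to account for the residual
        # connection, and re-normalize rows so each stays a distribution.
        a = layer_attention.mean(dim=1)
        a = a + torch.eye(a.size(-1), device=a.device)
        a = a / a.sum(dim=-1, keepdim=True)
        # Multiply into the running product, deeper layers on the left.
        rollout = a if rollout is None else torch.bmm(a, rollout)
    return rollout
```

Chefer et al.'s gradient-weighted refinement keeps the same recursive product but first scales each layer's attention map by its gradient with respect to the predicted class, keeping only the positive contributions.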
notice.md CHANGED
@@ -1,6 +1,7 @@
-* Shown on the left are the results from gradient-weighted attention rollout, as defined by [Hila Chefer](https://github.com/hila-chefer)
+* Shown on the left are the results from attention rollout, as defined by Abnar & Zuidema (2020)
+* In the center are the results from gradient-weighted attention rollout, as defined by [Hila Chefer](https://github.com/hila-chefer)
 [(Transformer-MM_explainability)](https://github.com/hila-chefer/Transformer-MM-Explainability/), with rollout recursion upto selected layer, and split out between contribution towards a predicted positive sentiment and a predicted negative sentiment.
-* Layer IG, as implemented in [Captum](https://captum.ai/)(LayerIntegratedGradients), based on gradient w.r.t. selected layer. IG integrates gradients over a path between observed word and a baseline (here we use two popular choices of baseline: the unknown word token, or the padding token).
+* Layer IG, as implemented in [Captum](https://captum.ai/)(LayerIntegratedGradients), based on gradient w.r.t. selected layer. IG integrates gradients over a path between observed word and a baseline (here we use two popular choices of baseline: the unknown word token, or the padding token).
 
 **Warning**
 Both Rollout and IG are so-called "attribution methods". Many such methods have been proposed, all attempting to determine the importance of the words in the input for the final prediction. Note, however, that they only provide a very limited form of "explanation", and that even the best methods often disagree. Attribution methods such as Rollout should not be used as the final word, but as providing initial hypotheses that can be further explored with other methods.
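The Layer IG bullet above corresponds to Captum's `LayerIntegratedGradients`. Here is a hedged sketch of that setup against a RoBERTa sentiment classifier, fixing the attribution layer to the embeddings for brevity (the demo lets you select a layer); the checkpoint name, target index, and baseline construction are assumptions for illustration, not the demo's exact code.

```python
import torch
from captum.attr import LayerIntegratedGradients
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed checkpoint: a 3-class (negative/neutral/positive) RoBERTa classifier.
model_name = "cardiffnlp/twitter-roberta-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

def forward_logits(input_ids):
    return model(input_ids).logits

# Attribute the prediction to the embedding layer.
lig = LayerIntegratedGradients(forward_logits, model.roberta.embeddings)

text = "I really enjoyed this film."
input_ids = tokenizer(text, return_tensors="pt").input_ids

# Baseline: the same sequence with every non-special token replaced by the
# padding token; the unknown token is the other choice the notice mentions.
baseline_ids = torch.full_like(input_ids, tokenizer.pad_token_id)
baseline_ids[0, 0] = input_ids[0, 0]    # keep <s>
baseline_ids[0, -1] = input_ids[0, -1]  # keep </s>

# Integrate gradients along the path from baseline to input, attributing
# the positive-sentiment logit (target=2 is an assumption about label order).
attributions = lig.attribute(input_ids, baselines=baseline_ids, target=2)
token_scores = attributions.sum(dim=-1).squeeze(0)  # one score per input token
```

Swapping `tokenizer.pad_token_id` for `tokenizer.unk_token_id` in the baseline gives the other baseline choice the notice mentions.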