Lev McKinney committed

Commit d72c970
1 Parent(s): 04cc41f

Added examples

Files changed (1)
  1. app.py +23 -1

app.py CHANGED
@@ -56,9 +56,31 @@ A lens into a transformer with n layers allows you to replace the last $m$ layer
 
 This essentially skips over these last few layers and lets you see the best prediction that can be made from the model's representations, i.e. the residual stream, at layer $n - m$. Since the representations may be rotated, shifted, or stretched from layer to layer, it is useful to train the lens's affine adapters separately for each layer (a rough sketch of this idea follows the diff below). This training is what differentiates this method from simpler approaches that decode the residual stream of the network directly using the unembedding layer, i.e. the logit lens. We explain this process in [the paper](https://arxiv.org/abs/2303.08112).
 
-
 ## Usage
 Since the tuned lens produces a distribution of predictions, we need to provide a summary statistic to plot in order to visualize its output. The default is simply [entropy](https://en.wikipedia.org/wiki/Entropy_(information_theory)), but you can also choose the [cross entropy](https://en.wikipedia.org/wiki/Cross_entropy) with the target token, or the [KL divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence) between the model's predictions and the tuned lens's predictions (sketches of these statistics likewise follow the diff). You can also hover over a token to see more of the distribution, i.e. the top 10 most probable tokens and their probabilities.
+
+## Examples
+Here are some interesting examples you can try.
+
+### Copy pasting
+```
+Copy: A!2j!#u&NGApS&MkkHe8Gm!#
+Paste: A!2j!#u&NGApS&MkkHe8Gm!#
+```
+
+### Trivial in-context learning
+```
+inc 1 2
+inc 4 5
+inc 13
+```
+
+### Addition
+```
+add 1 1 2
+add 3 4 7
+add 13 2
+```
 """
 
 with gr.Blocks() as demo:
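
To make the description in the docstring concrete: the logit lens decodes the residual stream directly through the unembedding matrix, while the tuned lens first maps it through a learned, layer-specific affine translator. The following is a minimal PyTorch sketch of that idea, not the actual tuned-lens library API; `hidden`, `W_U`, and `translator` are hypothetical stand-ins.

```python
import torch

d_model, vocab_size = 512, 50257  # hypothetical dimensions

hidden = torch.randn(d_model)           # residual stream at layer n - m
W_U = torch.randn(vocab_size, d_model)  # unembedding matrix

# Logit lens: decode the residual stream directly with the unembedding.
logit_lens_logits = W_U @ hidden

# Tuned lens: first pass the hidden state through an affine translator
# trained specifically for this layer, then unembed.
translator = torch.nn.Linear(d_model, d_model)
tuned_lens_logits = W_U @ translator(hidden)

# Either set of logits yields a next-token distribution.
probs = torch.softmax(tuned_lens_logits, dim=-1)
```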
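
Similarly, the three summary statistics described under "Usage" could be computed from a lens distribution as below. This is an illustrative sketch under assumed placeholders: the logits tensors and `target_token` id are made up for the example, not the app's actual variables.

```python
import torch
import torch.nn.functional as F

vocab_size = 50257
lens_logits = torch.randn(vocab_size)   # tuned lens predictions (placeholder)
model_logits = torch.randn(vocab_size)  # full model predictions (placeholder)
target_token = 42                       # hypothetical target token id

lens_log_probs = F.log_softmax(lens_logits, dim=-1)
model_log_probs = F.log_softmax(model_logits, dim=-1)

# Entropy of the lens distribution (the default statistic).
entropy = -(lens_log_probs.exp() * lens_log_probs).sum()

# Cross entropy with the target token.
cross_entropy = -lens_log_probs[target_token]

# KL divergence between the model's and the tuned lens's predictions.
kl = F.kl_div(lens_log_probs, model_log_probs, log_target=True, reduction="sum")
```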