paulbricman committed
Commit 9b67bb3
1 Parent(s): d8da9a4

feat: integrate write-up

Files changed (4)
  1. __pycache__/util.cpython-38.pyc +0 -0
  2. main.py +11 -5
  3. util.py +1 -2
  4. write-up.txt +25 -0
__pycache__/util.cpython-38.pyc CHANGED
Binary files a/__pycache__/util.cpython-38.pyc and b/__pycache__/util.cpython-38.pyc differ
main.py CHANGED
@@ -24,20 +24,23 @@ st.markdown('---')
 
 cols = st.columns([1, 1])
 query = st.sidebar.text_input(
-    'driving query', help='Specify the overarching query which will drive the salience map.')
-duration = st.sidebar.slider('pulse duration (seconds)', 0., 5., step=0.1, value=1.,
+    'driving query', help='Specify the overarching query which will drive the salience map.', value='What are perceptual engines?')
+duration = st.sidebar.slider('pulse duration (seconds)', 0., 5., step=0.1, value=2.,
     help='Specify how long the pulse should take')
-focus = st.sidebar.slider('focus strength', 0., 1., step=0.01, value=0.8,
+focus = st.sidebar.slider('focus strength', 0., 1., step=0.01, value=1.,
     help='Specify how sharp the focus of the salience map should be. Low focus means the salience is distributed more broadly across tokens. High focus means only a handful of tokens will be attended to. `softmax_temperature = 1 - focus`')
 color = st.sidebar.color_picker(
     'halo color', help='Specify the color of the halo around tokens being attended to.', value='#2160EA')
 
 font_family = st.sidebar.selectbox(
-    'font family', sorted(['Monospace', 'Times New Roman', 'Arial', 'Helvetica', 'Courier', 'Calibri', 'Georgia', 'Space Grotesk']))
+    'font family', ['Space Grotesk', 'Monospace', 'Times New Roman', 'Arial', 'Helvetica', 'Courier', 'Calibri', 'Georgia'])
 font_size = st.sidebar.slider('font size', 10, 20, step=1, value=14,
     help='Specify how big the text should be.')
 
 style = f'''
+<link rel="preconnect" href="https://fonts.googleapis.com">
+<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
+<link href="https://fonts.googleapis.com/css2?family=Space+Grotesk:wght@300;400&display=swap" rel="stylesheet">
 <style>
 container {{
     font-size: {font_size}pt;
@@ -87,7 +90,10 @@ container {{
 }}
 </style>'''
 
-if 'content' not in st.session_state.keys() or st.session_state['content'] == None:
+if 'content' not in st.session_state.keys():
+    st.session_state['content'] = open('write-up.txt').read()
+
+if st.session_state['content'] == None:
     content = st.text_area('content', height=300)
     if st.button('save'):
         st.session_state['content'] = content

util.py CHANGED
@@ -10,7 +10,7 @@ import pandas as pd
 
 def attend(corpus, query, model, tokenizer, blacklist=False):
     token_blacklist = [119, 136, 106]
-    query = '\n\n---\n\n' + query
+    query = query
     full_ids = tokenizer(corpus + '\n\n' + query,
                          return_tensors='pt')['input_ids']
     query_ids = tokenizer(query,
@@ -20,7 +20,6 @@ def attend(corpus, query, model, tokenizer, blacklist=False):
 
     attention = [[e.detach().numpy()[0]]
                  for e in model(full_ids)[-1]][-2]
-    print(np.array(attention).shape)
     attention = np.array([e[1:-1]
                           for e in np.mean(attention, axis=(0, 1))[1:-1]])
 
write-up.txt ADDED
tldr: I render transformer self-attention explicit as a pulsing saliency map overlaid on a piece of text, in order to help guide the user's attention across it.

This time, we'll try something a bit different. This project's demo and write-up are one and the same, as they explore a new mechanic for digesting content. What better place, then, to play around with this way of navigating information than the write-up itself? I suggest reading the content through first, before trying out the interactive bits of this page. That way, even if reading the whole text beforehand defeats the purpose of the mechanic, you'll get a sense of what you could use it for in the future.

Over the past few months, I explored a couple of different approaches to navigating large amounts of knowledge. The lexiscore attempted to rank content items by a Shannon-esque estimate of how much information they provide you with, while the decontextualizer attempted to extract self-contained, stand-alone snippets from those documents. My understanding of how best to build tools for navigating large corpora evolved quite a bit during those months, and I now feel it's particularly fruitful to frame those tools as tiny building blocks of broader, more integrated perceptual engines. In contrast to search engines, perceptual engines aim to be more abstractive (than extractive), more interactive (than a static dump of results), and richer in top-down influences (than a single text query to rule them all). In this context, semantic salience is a building block meant to be placed relatively late in the processing pathway: only after mountains of data have been compressed so as to pass through the bottleneck of the user's actual sensorium.

The motivation behind semantic salience specifically is that text as we know it, written symbols strung together on a screen, is far from being a humane representation of thought. Just think of how long it takes to perceive a natural image compared to reading a thousand words describing it: perhaps a second versus a few minutes. Assuming the old adage is decently accurate in comparing the amount of information contained in the depiction and the description, it's quite obvious that text is not particularly ergonomic for people as a medium for encoding information. We invented paragraphs and outlines to help us hierarchically navigate it to a first approximation, and various formatting tools to make certain bits stand out, both of which feel like rudimentary, even forced, approaches to coercing text into being more brain-friendly. On the other hand, NLP models trained on large bodies of text have it "in their blood": they're designed from the ground up to read, attend to, form memories of, and write text. A lot of text, but just text.

Given this discrepancy, what if we borrowed the ability to perceive text effectively from NLP models? I'm not referring to asking them to write new and more brain-friendly (i.e. shorter) text (e.g. a summary, answers to questions). That would surely have its place somewhere in a perceptual engine, but I'm wondering here whether we could specifically borrow the very ability to perceive a text as a whole. It could be the original text, or the result of a previous processing step.

A promising place where this deeper human-machine integration could happen is at the level of attention. Interestingly enough, many recent NLP models incorporate what are called self-attention and cross-attention layers. Both flavors help the model as a whole figure out what it should attend to, what it should allocate its representational resources to. For instance, in machine translation (arguably the birthplace of attention layers), a model learns to pay attention to different parts of an English sentence as it renders it in French. It doesn't encode the whole English sentence before regurgitating a French translation, but "keeps an eye on" relevant parts of the input as it writes different parts of the output. This led to better results, especially in situations where the input was way too complex to represent at once.

That's all great: the ability to attend to the right things helped NLP models (and soon after, ML models in general) get better at what they were doing before, from machine translation to language modeling. However, an underrated side-effect of those improvements is the fact that the trained model knows what is relevant to attend to! You can reverse engineer which specific parts of the input the model is attending to as it's doing its job. That's super interesting! You get a glimpse into its idealized perception mechanism, and get a feel for what it's looking at. This is of course useful as an interpretability tool, as an "input features" type of local explanation: if the image classifier looks at the snow around the dog, rather than at the dog itself, when trying to determine whether there's a Husky in the picture, then it's time to head back to the drawing board. There are dozens of other interpretability techniques, all useful debugging tools by day and juicy human-machine synergies by night (I'm working on a novel one as part of my bachelor's project!). However, I chose to focus on attention here due to the obvious analogy to cognitive psychology.

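As a minimal sketch of what that reverse engineering amounts to in practice, the snippet below pulls the per-layer attention maps out of a Hugging Face BERT model; the example sentence and variable names are purely illustrative:

from transformers import AutoModel, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
model = AutoModel.from_pretrained('bert-base-cased', output_attentions=True)

inputs = tokenizer('The dog sits in the snow.', return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)

# One attention tensor per layer, each of shape (batch, heads, seq_len, seq_len).
attentions = outputs.attentions
print(len(attentions), attentions[-2].shape)
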
Coming back to semantic salience as a tiny building block of perceptual engines, how can those attention layers actually help us perceive a long text? One possible approach is to use the artificial attention deployed by the NLP model to guide human attention. We can let the model do its job (e.g. compute semantic embeddings), then simply disregard the main outputs, and instead look under the hood for the attention matrices derived as a by-product. After mean-pooling across the attention heads, filtering for the attention used to specifically inform the representation of a custom query, and cleaning up the resulting values a bit more based on their overall distribution, we get a compact human-readable explanation of what the model is attending to. If we then take those values and represent them using cognitively ergonomic pre-attentive features like color and movement, we can finally guide the user into paying attention to what the machine is paying attention to, weaving together parts of their perceptual systems.

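To make that pipeline concrete, here is a minimal sketch assuming bert-base-cased and the second-to-last layer mentioned below; the salience() helper, its exact indexing, and the final normalization are illustrative choices, not the project's exact code:

import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
model = AutoModel.from_pretrained('bert-base-cased', output_attentions=True)

def salience(paragraph, query, layer=-2):
    # Append the query behind the scenes and keep the attention maps.
    ids = tokenizer(paragraph + '\n\n' + query, return_tensors='pt')['input_ids']
    with torch.no_grad():
        att = model(ids).attentions[layer][0].numpy()  # (heads, seq, seq)

    # Mean-pool across the attention heads.
    att = att.mean(axis=0)  # (seq, seq)

    # Keep only the rows where query tokens attend to paragraph tokens,
    # dropping [CLS]/[SEP], then average over the query tokens.
    n_query = tokenizer(query, return_tensors='pt')['input_ids'].shape[1] - 2
    scores = att[-(n_query + 1):-1, 1:-(n_query + 1)].mean(axis=0)

    # Clean up against the overall distribution so outliers stand out.
    return (scores - scores.mean()) / (scores.std() + 1e-8)
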
A few more technical details. I opted for the bert-base-cased model, working with all the attention heads from its second-to-last layer. Each paragraph is processed separately, after appending the user's custom query to it behind the scenes. I went with processing individual paragraphs instead of the whole document at once because the model appeared to have a bias towards the last part of the document. It makes a lot of sense: the last couple of paragraphs are particularly relevant for understanding what comes next. The tokens which are particularly salient for the NLP model get marked with custom CSS styling in order to grab the user's attention. This includes color and a pulsing animation as a means of covering those pre-attentive features. I made the formatting quite customizable so that people can play around with different visual representations of the model's attention and see how it feels.

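To give a flavor of the styling side, here is one way such a halo could be generated from Python; the halo_css() helper, the .salient class, and the keyframes below are guesses for illustration, not the app's actual stylesheet:

def halo_css(color='#2160EA', duration=2., font_family='Space Grotesk', font_size=14):
    # Sketch: salient tokens would be wrapped in <span class="salient">...</span>
    # and the result rendered via st.markdown(..., unsafe_allow_html=True).
    return f'''
<style>
container {{
    font-family: {font_family};
    font-size: {font_size}pt;
}}
.salient {{
    border-radius: 0.2em;
    animation: pulse {duration}s ease-in-out infinite;
}}
@keyframes pulse {{
    0%, 100% {{ background-color: transparent; }}
    50% {{ background-color: {color}66; }}
}}
</style>'''
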
The two most important settings to configure are the driving query and the amount of focus. On one hand, the driving query forces the model to attend to the parts of the text which help inform its understanding of what the query means. It could roughly be seen as a top-down influence on the way the model's attention gets deployed. The query could be a short noun phrase, a complete question, or whatever fits in a text box. On the other hand, the amount of focus determines how broad or narrow the saliency map should be. Low focus leads to the model's attention being spread out across the whole text, while high focus sharpens the saliency map on the few most relevant tokens.

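Concretely, the focus setting can be mapped onto a temperature-scaled softmax over the raw salience scores, following the softmax_temperature = 1 - focus convention from the app's help text; the sharpen() helper below is a sketch of that idea rather than the app's code:

import numpy as np

def sharpen(scores, focus=0.8):
    # Lower temperature -> sharper distribution over tokens.
    temperature = max(1. - focus, 1e-3)
    scaled = np.array(scores) / temperature
    weights = np.exp(scaled - scaled.max())
    return weights / weights.sum()
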
While this project focused on using artificial attention to help people perceive text, the same ability could be repurposed in a few different ways. For instance, what if you used a similar saliency map to get a better feel for how two written notes relate to each other? Treat one as the query and the other as the main document, and you can guide user attention towards the potential connections. Also, if you have a small set of notes, each color-coded using a subtle color palette, you might learn to perceive the way they all relate to each other by color-coding the saliency maps accordingly. "Ah, so this note might relate to the light blue one through those blue highlights, and to the light green one through those green ones, now I see it..." Perhaps with a directed graph in the background depicting how information flows among those notes, how they inform each other's meaning. Think ad-hoc continuous links. Selecting a bit of text might take you to other notes with a probability based on the strength of the color-coded saliency map at that location. But what if the user selection, too, were a continuously-updated continuous distribution over tokens instead of a discrete snippet? Eye tracking? Intent recognition based on interaction history and a user model? I sometimes feel that every month I put together a toy project I could easily spend a few years working on, looking into its ramifications.

Before I end this, the analogy between organic and artificial attention goes way deeper than the surface feature of "focusing on specific things." For instance, it has been hypothesized that endogenous attention (i.e. intentionally paying attention to something, rather than something grabbing your attention) is partially realized in the brain by means of firing synchrony: higher-level neuron clusters (allegedly coding more abstract concepts) encouraging particular signals in lower-level ones (allegedly coding raw sensory information) by means of synchronized firing. Similarly, artificial attention is roughly implemented by selecting for raw signals which are aligned with higher-level "queries" by means of a dot product. It would be really interesting if top-down human queries (e.g. What are perceptual engines?) could directly guide bottom-up artificial processing without having to write out the query in text, instead picking it up neurally or predicting it.

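For reference, here is a bare-bones version of that dot-product selection, stripped of the multi-head and projection machinery of real transformer layers:

import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    # queries: (n_q, d); keys, values: (n_k, d)
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values, weights
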
To wrap up, borrowing from an NLP model's native ability to attend to different parts of a text might, in turn, help us deploy our own attention effectively. Now, this is only a "last-mile" solution to perceiving large corpora: it helps you find stuff that's already on screen, and doesn't address deciding what makes it there in the first place. Still, this project has been a useful exercise in thinking more about my wishlist of features and the architecture of perceptual engines. Give it a shot! Try setting a driving query in the sidebar, play around with the focus, or even add your own text by resetting the content.