Highlight a token

#1
by Pendrokar - opened
Owner
• edited Feb 10

The original video demo shows that it was possible to highlight a word/token to understand which emojis applied to it.
https://www.youtube.com/watch?v=u_JwYxtjzUs

This part of the Python code is not in the DeepMoji/TorchMoji repos.

Pendrokar changed discussion title from Per token prediction to Highlight a token

Hat tip to @bfelbo, your model lives on 🔥

Amazing to see that you've created a demo for DeepMoji/TorchMoji 🙌

I dug into the archives and found what we used for highlighting tokens, in case it's helpful. The approach was pretty basic, but it worked nicely. The idea is to run the model once without each word and measure how much the probability distribution changes when that word is removed. Here's how we generated the variations:

      import numpy as np

      # npa is a (1, 30) array of token ids for the sentence,
      # zero-padded at the end.
      it = np.nditer(npa[0], flags=['f_index'])
      while not it.finished:
          if it[0] > 0:  # skip zero padding tokens
              # Drop the current token, re-pad back to length 30,
              # and append the variation as a new row.
              npan = np.delete(npa[0], it.index)
              npan.resize((1, 30))
              npa = np.append(npa, npan, axis=0)
          it.iternext()
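After the loop, row 0 of npa is still the full sentence, and each subsequent row is the sentence with one (non-padding) token removed, in token order.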

We found that small changes towards the end of the probability distribution could have a disproportionate impact on a word's importance score. This makes sense, as training mainly focuses on getting the top predictions right. To counteract this, we only considered changes in the top 5 emoji predictions:

      # 'rounded' holds the emoji probability vector for one model run.
      ind_top = top_elements(rounded, 5)   # indices of the 5 largest probs
      masked = np.zeros_like(rounded)
      masked[ind_top] = rounded[ind_top]   # zero out everything else
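For reference, top_elements is the small helper from the torchmoji example scripts; it just returns the indices of the k largest entries, roughly like this:

      def top_elements(array, k):
          # Indices of the k largest entries, sorted by descending value.
          ind = np.argpartition(array, -k)[-k:]
          return ind[np.argsort(array[ind])][::-1]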

As you can see, the approach is very straightforward. You then compute the differences between the masked probability distributions and highlight the words that cause big differences. The obvious downside of this approach is that you run the model once for every word, which is also why the demo only allowed sentences with up to ~30 tokens. I'm sure you can come up with a more efficient approach. The nice part, though, is that it directly captures the impact of each word on the prediction.
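To tie the pieces together, here's a minimal sketch of that final scoring step. predict_probs is a hypothetical stand-in for whatever runs the model on one row of token ids and returns its emoji probability vector, and for simplicity it masks with the full sentence's top-k indices:

      import numpy as np

      def word_importances(npa, words, predict_probs, k=5):
          # One probability vector per row: row 0 is the full sentence,
          # the rest are the leave-one-out variations generated above.
          probs = np.array([predict_probs(row) for row in npa])
          ind_top = top_elements(probs[0], k)  # only compare the top-k emojis
          scores = [np.abs(probs[0][ind_top] - variant[ind_top]).sum()
                    for variant in probs[1:]]
          return list(zip(words, scores))      # bigger score = bigger impact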

Hope it's helpful!
