paul hilders committed on
Commit 9fa77d1
1 Parent(s): 05b69fa

Add link to github

Files changed (1)
  1. app.py +3 -3
app.py CHANGED
@@ -146,13 +146,13 @@ with demo_tabs:
     gr.Markdown("""
     ### Acknowledgements
     This demo was developed for the Interpretability & Explainability in AI course at the University of
-    Amsterdam. We would like express our thanks to Jelle Zuidema, Jaap Jumelet, Tom Kersten, Christos
+    Amsterdam. We would like to express our thanks to Jelle Zuidema, Jaap Jumelet, Tom Kersten, Christos
     Athanasiadis, Peter Heemskerk, Zhi Zhang, and all the other TAs who helped us during this course.
 
     ---
     ### References
-    \[1\]: Chefer, H., Gur, S., & Wolf, L. (2021). Generic attention-model explainability for interpreting bi-modal and encoder-decoder transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 397-406). <br>
+    \[1\]: Chefer, H., Gur, S., & Wolf, L. (2021). Generic attention-model explainability for interpreting bi-modal and encoder-decoder transformers. <br>
     \[2\]: Abnar, S., & Zuidema, W. (2020). Quantifying attention flow in transformers. arXiv preprint arXiv:2005.00928. <br>
-    \[3\]: https://samiraabnar.github.io/articles/2020-04/attention_flow <br>
+    \[3\]: [https://samiraabnar.github.io/articles/2020-04/attention_flow](https://samiraabnar.github.io/articles/2020-04/attention_flow) <br>
     """)
 demo_tabs.launch(show_error=True)