Martijn van Beers
Move text and examples into separate files
adf3a47
Acknowledgements

This demo was developed for the Interpretability & Explainability in AI course at the University of Amsterdam. We would like to thank Jelle Zuidema, Jaap Jumelet, Tom Kersten, Christos Athanasiadis, Peter Heemskerk, Zhi Zhang, and all the other TAs who helped us during the course.


References

[1]: Chefer, H., Gur, S., & Wolf, L. (2021). Generic attention-model explainability for interpreting bi-modal and encoder-decoder transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV).
[2]: Abnar, S., & Zuidema, W. (2020). Quantifying attention flow in transformers. arXiv preprint arXiv:2005.00928.
[3]: Abnar, S. (2020). Blog post: https://samiraabnar.github.io/articles/2020-04/attention_flow