This is a collection of models and Spaces associated with the paper "Disentangling and Integrating Relational and Sensory Information in Transformer Architectures".
Note: Generate text with Dual Attention Transformer Language Models. (Inference can be slow since this Space runs on HF's free CPU resources; for faster inference, you can run the app locally.)
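As a reference point, here is a minimal sketch of generating text locally with one of the DAT language model checkpoints. It assumes the checkpoints can be loaded through the `transformers` Auto classes with `trust_remote_code=True`; the repository ID below is a placeholder rather than an actual model name from this collection.

```python
# Minimal sketch of local text generation with a Dual Attention Transformer LM.
# Assumptions: the checkpoint loads via the transformers Auto classes with
# trust_remote_code=True, and "username/dual-attention-transformer-lm" is a
# placeholder repo ID -- substitute a model from this collection.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "username/dual-attention-transformer-lm"  # placeholder repo ID

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)
model.eval()

prompt = "Relational attention lets the model"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a short continuation on CPU (slow, mirroring the hosted Space).
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```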
Note: Visualize the internal representations of Dual Attention Transformer Language Models. Explore the relational representations in relational attention.