Collections
Collections including paper arxiv:2404.03646
- Advancing LLM Reasoning Generalists with Preference Trees
  Paper • 2404.02078 • Published • 41
- Locating and Editing Factual Associations in Mamba
  Paper • 2404.03646 • Published • 3
- Locating and Editing Factual Associations in GPT
  Paper • 2202.05262 • Published • 1
- KAN: Kolmogorov-Arnold Networks
  Paper • 2404.19756 • Published • 93
- JoMA: Demystifying Multilayer Transformers via JOint Dynamics of MLP and Attention
  Paper • 2310.00535 • Published • 2
- Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small
  Paper • 2211.00593 • Published • 2
- Rethinking Interpretability in the Era of Large Language Models
  Paper • 2402.01761 • Published • 18
- Does Circuit Analysis Interpretability Scale? Evidence from Multiple Choice Capabilities in Chinchilla
  Paper • 2307.09458 • Published • 9
- Motion Mamba: Efficient and Long Sequence Motion Generation with Hierarchical and Bidirectional Selective SSM
  Paper • 2403.07487 • Published • 12
- LocalMamba: Visual State Space Model with Windowed Selective Scan
  Paper • 2403.09338 • Published • 7
- Cobra: Extending Mamba to Multi-Modal Large Language Model for Efficient Inference
  Paper • 2403.14520 • Published • 31
- SiMBA: Simplified Mamba-Based Architecture for Vision and Multivariate Time Series
  Paper • 2403.15360 • Published • 11
- A Primer on the Inner Workings of Transformer-based Language Models
  Paper • 2405.00208 • Published • 6
- What needs to go right for an induction head? A mechanistic study of in-context learning circuits and their formation
  Paper • 2404.07129 • Published • 3
- LM Transparency Tool: Interactive Tool for Analyzing Transformer Language Models
  Paper • 2404.07004 • Published • 3
- Does Transformer Interpretability Transfer to RNNs?
  Paper • 2404.05971 • Published • 3