Rigorously Assessing Natural Language Explanations of Neurons Paper • 2309.10312 • Published Sep 19, 2023
MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions Paper • 2305.14795 • Published May 24, 2023
A Reply to Makelov et al. (2023)'s "Interpretability Illusion" Arguments Paper • 2401.12631 • Published Jan 23, 2024
pyvene: A Library for Understanding and Improving PyTorch Models via Interventions Paper • 2403.07809 • Published Mar 12, 2024
CEBaB: Estimating the Causal Effects of Real-World Concepts on NLP Model Behavior Paper • 2205.14140 • Published May 27, 2022
Interpretability at Scale: Identifying Causal Mechanisms in Alpaca Paper • 2305.08809 • Published May 15, 2023