Recurrent Neural Networks Learn to Store and Generate Sequences using Non-Linear Representations Paper • 2408.10920 • Published Aug 20, 2024 • 1
NNsight and NDIF: Democratizing Access to Foundation Model Internals Paper • 2407.14561 • Published Jul 18, 2024 • 33
Reframing Human-AI Collaboration for Generating Free-Text Explanations Paper • 2112.08674 • Published Dec 16, 2021
Teach Me to Explain: A Review of Datasets for Explainable Natural Language Processing Paper • 2102.12060 • Published Feb 24, 2021
Attentiveness to Answer Choices Doesn't Always Entail High QA Accuracy Paper • 2305.14596 • Published May 24, 2023 • 1
The Unreasonable Effectiveness of Easy Training Data for Hard Tasks Paper • 2401.06751 • Published Jan 12, 2024 • 1
ScoNe: Benchmarking Negation Reasoning in Language Models With Fine-Tuning and In-Context Learning Paper • 2305.19426 • Published May 30, 2023
CEBaB: Estimating the Causal Effects of Real-World Concepts on NLP Model Behavior Paper • 2205.14140 • Published May 27, 2022
Rigorously Assessing Natural Language Explanations of Neurons Paper • 2309.10312 • Published Sep 19, 2023
Linear Representations of Sentiment in Large Language Models Paper • 2310.15154 • Published Oct 23, 2023
Neural Natural Language Inference Models Partially Embed Theories of Lexical Entailment and Negation Paper • 2004.14623 • Published Apr 30, 2020
A Reply to Makelov et al. (2023)'s "Interpretability Illusion" Arguments Paper • 2401.12631 • Published Jan 23, 2024
pyvene: A Library for Understanding and Improving PyTorch Models via Interventions Paper • 2403.07809 • Published Mar 12, 2024 • 1
Finding Alignments Between Interpretable Causal Variables and Distributed Neural Representations Paper • 2303.02536 • Published Mar 5, 2023 • 1
RAVEL: Evaluating Interpretability Methods on Disentangling Language Model Representations Paper • 2402.17700 • Published Feb 27, 2024 • 2