The Hallucinations Leaderboard, an Open Effort to Measure Hallucinations in Large Language Models Blog post • Published Jan 29
Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering Paper • 2410.15999 • Published Oct 21
🔍 Daily Picks in Interpretability & Analysis of LMs Collection • Outstanding research in interpretability and evaluation of language models, summarized • 90 items
Adapting Neural Link Predictors for Data-Efficient Complex Query Answering Paper • 2301.12313 • Published Jan 29, 2023
Attention Is All You Need But You Don't Need All Of It For Inference of Large Language Models Paper • 2407.15516 • Published Jul 22
DeCoRe: Decoding by Contrasting Retrieval Heads to Mitigate Hallucinations Paper • 2410.18860 • Published Oct 24