Chain-of-Verification Reduces Hallucination in Large Language Models • arXiv:2309.11495 • Sep 20, 2023
Hallucination Detox: Sensitive Neuron Dropout (SeND) for Large Language Model Training • arXiv:2410.15460 • Oct 20, 2024
DeCoRe: Decoding by Contrasting Retrieval Heads to Mitigate Hallucinations • arXiv:2410.18860 • Oct 24, 2024
Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models • arXiv:2411.14257 • Nov 21, 2024
Linear Correlation in LM's Compositional Generalization and Hallucination • arXiv:2502.04520 • Feb 2025
The Hidden Life of Tokens: Reducing Hallucination of Large Vision-Language Models via Visual Information Steering • arXiv:2502.03628 • Feb 2025