MedHallu: A Comprehensive Benchmark for Detecting Medical Hallucinations in Large Language Models
Abstract
Advancements in Large Language Models (LLMs) and their increasing use in medical question-answering necessitate rigorous evaluation of their reliability. A critical challenge is hallucination, where models generate plausible yet factually incorrect outputs; in the medical domain, this poses serious risks to patient safety and clinical decision-making. To address this, we introduce MedHallu, the first benchmark specifically designed for medical hallucination detection. MedHallu comprises 10,000 high-quality question-answer pairs derived from PubMedQA, with hallucinated answers systematically generated through a controlled pipeline. Our experiments show that state-of-the-art LLMs, including GPT-4o, Llama-3.1, and the medically fine-tuned UltraMedical, struggle with this binary hallucination detection task: the best model achieves an F1 score of only 0.625 on the "hard" category of hallucinations. Using bidirectional entailment clustering, we show that harder-to-detect hallucinations are semantically closer to the ground truth. Our experiments also show that incorporating domain-specific knowledge and adding a "not sure" answer category improve precision and F1 scores by up to 38% relative to baselines.
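The bidirectional entailment check mentioned in the abstract can be illustrated with a short sketch: two answers are grouped into the same semantic cluster only if each entails the other under a natural language inference (NLI) model. The snippet below is a minimal illustration, not the paper's exact pipeline; the choice of NLI model (`microsoft/deberta-large-mnli`) and the example sentences are assumptions made for demonstration.

```python
# Minimal sketch of a bidirectional-entailment check, assuming an
# off-the-shelf MNLI-style model; model choice and examples are illustrative,
# not the exact setup used in the MedHallu paper.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

NLI_MODEL = "microsoft/deberta-large-mnli"  # assumption: any MNLI-trained model works here
tokenizer = AutoTokenizer.from_pretrained(NLI_MODEL)
model = AutoModelForSequenceClassification.from_pretrained(NLI_MODEL)
model.eval()

def entails(premise: str, hypothesis: str) -> bool:
    """Return True if the NLI model predicts 'entailment' for premise -> hypothesis."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    label = model.config.id2label[int(logits.argmax(dim=-1))]
    return label.lower() == "entailment"

def bidirectionally_entailed(answer_a: str, answer_b: str) -> bool:
    """Treat two answers as semantically equivalent only if each entails the other."""
    return entails(answer_a, answer_b) and entails(answer_b, answer_a)

# Example (hypothetical): a hallucinated answer that is not bidirectionally
# entailed by the ground truth falls into a different semantic cluster.
ground_truth = "Beta-blockers reduce mortality after myocardial infarction."
candidate = "Beta-blockers increase mortality after myocardial infarction."
print(bidirectionally_entailed(ground_truth, candidate))  # expected: False
```

Under this kind of check, "hard" hallucinated answers would more often survive one or both entailment directions against the ground truth, which is consistent with the abstract's observation that they are semantically closer to it.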
Community
Automated message from the Librarian Bot: the following similar papers were recommended by the Semantic Scholar API.
- Fine-tuning Large Language Models for Improving Factuality in Legal Question Answering (2025)
- Mitigating Hallucinated Translations in Large Language Models with Hallucination-focused Preference Optimization (2025)
- HALoGEN: Fantastic LLM Hallucinations and Where to Find Them (2025)
- LLM-MedQA: Enhancing Medical Question Answering through Case Studies in Large Language Models (2024)
- Poison as Cure: Visual Noise for Mitigating Object Hallucinations in LVMs (2025)
- Evaluating Hallucination in Large Vision-Language Models based on Context-Aware Object Similarities (2025)
- Reducing Hallucinations of Medical Multimodal Large Language Models with Visual Retrieval-Augmented Generation (2025)