gsarti
posted an update Jan 17
πŸ’₯ Today's pick in Interpretability & Analysis of LMs: Fine-grained Hallucination Detection and Editing For Language Models by @abhika-m @akariasai @vidhisha et al.

The authors introduce a new taxonomy for fine-grained annotation of hallucinations in LM generations and propose Factuality Verification with Augmented Knowledge (FAVA), a retrieval-augmented LM fine-tuned to detect and edit hallucinations in LM outputs. FAVA outperforms ChatGPT and Llama 2 Chat on both detection and editing tasks.

🌐 Website: https://fine-grained-hallucination.github.io
πŸ“„ Paper: Fine-grained Hallucination Detection and Editing for Language Models (2401.06855)
πŸš€ Demo: fava-uw/fava
πŸ€– Model: fava-uw/fava-model
πŸ”‘ Dataset: fava-uw/fava-data