santiviquez posted an update Feb 13
Hey GPT, check yourself...

Here is a black-box method for hallucination detection that shows strong correlation with human annotations. 🔥

💡 The idea is the following: ask GPT, or any other powerful LLM, to sample multiple answers for the same prompt, and then ask it whether these answers align with the statements in the original output. Have it answer yes/no and measure how often the generated samples support each original statement (see the sketch below).

This method is called SelfCheckGPT with Prompt and shows very nice results. 👀

The downside: we have to make many LLM calls just to evaluate a single generated paragraph... 🙃
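
For intuition, here is a minimal Python sketch of that recipe. It assumes a generic `complete(prompt)` helper standing in for whatever LLM API you use; the helper, function names, and prompt wording are illustrative assumptions, not the paper's exact implementation.

```python
def complete(prompt: str) -> str:
    """Placeholder for a call to your LLM of choice (wire this to a real API)."""
    raise NotImplementedError


def sample_answers(question: str, n_samples: int = 5) -> list[str]:
    """Draw several stochastic answers to the same prompt."""
    return [complete(question) for _ in range(n_samples)]


def supports(sample: str, statement: str) -> bool:
    """Ask the LLM whether a sampled answer supports a statement (yes/no)."""
    verdict = complete(
        f"Context: {sample}\n"
        f"Statement: {statement}\n"
        "Does the context support the statement? Answer yes or no."
    )
    return verdict.strip().lower().startswith("yes")


def hallucination_score(statement: str, samples: list[str]) -> float:
    """Fraction of samples that do NOT support the statement (higher = more suspect)."""
    votes = [supports(s, statement) for s in samples]
    return 1.0 - sum(votes) / len(votes)


# Usage (requires a real `complete` implementation):
#   samples = sample_answers(question, n_samples=5)
#   scores = {sent: hallucination_score(sent, samples) for sent in original_sentences}
# Cost: N sampling calls plus N * (#statements) yes/no calls per paragraph.
```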

More details and variations of this method are in the paper: SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models (2303.08896)