Beyond Traditional Fine-tuning: Exploring Advanced Techniques to Mitigate LLM Hallucinations

Community Article Published February 11, 2024

Large language models (LLMs) have revolutionized text generation, but their tendency to produce factually incorrect or nonsensical outputs, known as "hallucinations," remains a major concern. Yesterday I read an info-packed paper titled "A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models", which covers the prominent methods used for hallucination mitigation. So let's unpack what it has.

Hallucination

Hallucination in LLMs refers to the models' tendency to generate text that appears factual but is entirely fabricated or not grounded in reality. This leads to decreased model accuracy, misleading insights, biased and contradictory outputs, unrealistic narratives, and more. So in simple terms, hallucination is when an LLM tries to fluke an answer it doesn't actually know (just kidding😃😃).

Techniques for Hallucination Mitigation

Researchers have proposed a diverse range of techniques to mitigate hallucinations in LLMs. The survey divides hallucination mitigation techniques into two broad categories: Prompt Engineering and Developing Methods. Prompt Engineering is further divided into three parts: Retrieval-Augmented Generation (RAG), Self-Refinement through Feedback and Reasoning, and Prompt Tuning.

Retrieval-Augmented Generation (RAG)

RAG, short for Retrieval-Augmented Generation, is a technique that combines retrieval-based and generative methods to improve the performance of LLMs. The retrieval module searches for relevant information in external sources, and the generation module uses the retrieved information to produce the LLM's response. Many techniques fall under RAG; some of them are listed below, with a minimal retrieve-then-generate sketch after the list:

  1. LLM Augmentor - Adapts the LLM to specific tasks by adding small modules to the LLM architecture and then fine-tuning only those internal parameters for the target task.
  2. FreshPrompt - Retrieves information relevant to the user query from an up-to-date search engine and uses it to build the LLM's response.
  3. Knowledge Retrieval - The LM draws relevant knowledge from external sources, using keyword search and embedding-based retrieval to find the information needed to produce the response.
  4. Decompose and Query framework - Breaks the user query into smaller sub-questions, and the LLM generates a relevant response for each sub-question.
  5. High Entropy Word Spotting and Replacement - Improves the creativity and diversity of LLM outputs by identifying words generated with high entropy and replacing them using synonym search, random sampling, or knowledge replacement.
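
As a rough illustration of the retrieve-then-generate pattern these techniques share, here is a minimal sketch in Python. The keyword-overlap retriever, the tiny corpus, and the stubbed `call_llm` function are all placeholder assumptions for illustration, not part of any surveyed method:

```python
# Minimal retrieve-then-generate sketch. The keyword-overlap scorer and the
# stubbed `call_llm` are placeholders, not a surveyed method's exact recipe.

def score(query: str, doc: str) -> int:
    """Count keyword overlap between the query and a document (toy retriever)."""
    q_terms = set(query.lower().split())
    return sum(1 for term in doc.lower().split() if term in q_terms)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest keyword overlap."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (any chat-completion API would go here)."""
    return f"<model answer grounded in: {prompt!r}>"

corpus = [
    "The Eiffel Tower is 330 metres tall.",
    "The Great Wall of China is not visible from orbit with the naked eye.",
    "Mount Everest is 8,849 metres above sea level.",
]

query = "How tall is the Eiffel Tower?"
context = "\n".join(retrieve(query, corpus))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
print(call_llm(prompt))
```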

Self-Refinement through Feedback and Reasoning

Self-Refinement through Feedback and Reasoning is a novel approach in which large language models (LLMs) iteratively improve their own outputs. It leverages feedback-based learning and the model's reasoning abilities to achieve better factuality, consistency, and relevance in the generated text. Techniques in this family include ChatProtect, the Self-Reflection Method, Structured Comparative Reasoning, Chain of Verification (CoVe), Chain of Natural Language Inference (CoNLI), and others.
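
To make the iterative idea concrete, here is a hedged sketch of a draft-critique-revise loop. The prompt wording, the stopping rule, and the stubbed `call_llm` function are assumptions for illustration; none of the named methods (ChatProtect, CoVe, CoNLI) prescribe exactly this recipe:

```python
# Sketch of an iterative self-refinement loop: draft -> critique -> revise.
# `call_llm` is a stub so the example runs end to end without a real model.

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call."""
    return "OK" if "critique" in prompt.lower() else f"<draft for: {prompt}>"

def self_refine(question: str, max_rounds: int = 3) -> str:
    answer = call_llm(f"Answer the question: {question}")
    for _ in range(max_rounds):
        critique = call_llm(
            "Critique the answer below for factual errors and contradictions. "
            f"Reply 'OK' if none.\nQuestion: {question}\nAnswer: {answer}"
        )
        if critique.strip() == "OK":   # feedback says the answer is consistent
            break
        answer = call_llm(             # otherwise revise using the feedback
            f"Revise the answer to fix these issues: {critique}\n"
            f"Question: {question}\nAnswer: {answer}"
        )
    return answer

print(self_refine("Who wrote 'The Selfish Gene'?"))
```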

Prompt Tuning

Prompt tuning is the practice of tailoring prompts to guide LLMs toward the desired outputs. It avoids the need for extensive retraining, making it a powerful and efficient tool.
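
As a toy example of steering a model through the prompt alone, the template below instructs the model to stick to well-established facts and to admit uncertainty rather than guess. The exact wording is an assumption, not a prescribed template:

```python
# Toy prompt template: steer the model with instructions instead of retraining.
# The instruction wording below is an illustrative assumption.

TEMPLATE = (
    "You are a careful assistant. Answer the question using only well-established "
    "facts. If you are not sure of the answer, reply exactly: 'I don't know.'\n\n"
    "Question: {question}\nAnswer:"
)

def build_prompt(question: str) -> str:
    return TEMPLATE.format(question=question)

print(build_prompt("In what year was the transistor invented?"))
```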

Developing Methods

Many developing methods can also serve as effective techniques for mitigating LLM hallucinations. Some of them are:

1. Context-Aware Decoding (CAD) - It combats LLM hallucinations by integrating semantic context vectors into the decoding process. These vectors capture the meaning of the entire context, not just a specific word (as in the attention mechanism). CAD is particularly effective at overriding a model's prior knowledge when it contradicts the provided context, leading to substantial improvements in tasks where knowledge conflict is possible.
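
A toy numerical sketch of the contrastive idea is below: token scores computed with the context are played off against scores computed without it, so context-supported tokens get boosted over the model's prior. The single-`alpha` weighting is an assumed contrastive formulation for illustration, not necessarily the survey's exact recipe:

```python
import numpy as np

# Toy sketch of context-aware decoding as a contrast between token scores
# computed with and without the context. The (1 + alpha)/-alpha weighting is
# an assumption for illustration.

def cad_next_token(logits_with_ctx, logits_without_ctx, alpha=0.5):
    """Boost tokens that the context supports but the prior alone does not."""
    adjusted = (1 + alpha) * logits_with_ctx - alpha * logits_without_ctx
    probs = np.exp(adjusted - adjusted.max())
    return probs / probs.sum()

# Three toy vocabulary tokens: the context strongly supports token 1,
# while the model's prior (no context) prefers token 0.
logits_with_ctx = np.array([1.0, 3.0, 0.5])
logits_without_ctx = np.array([2.5, 1.0, 0.5])
print(cad_next_token(logits_with_ctx, logits_without_ctx))  # token 1 wins
```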

2. Decoding by Contrasting Layers (DoLa) - A simple decoding strategy designed to mitigate hallucinations in pre-trained LLMs without external knowledge conditioning or additional fine-tuning. DoLa obtains the next-token distribution by contrasting the logits of later and earlier layers, projected into the vocabulary space. This enhances the identification of factual knowledge and minimizes the generation of incorrect facts.
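
Here is a simplified sketch of the layer-contrasting idea: vocabulary distributions from a later ("mature") layer and an earlier ("premature") layer are compared, and tokens whose probability grows across layers are favoured. The fixed layer pair and the 0.1 plausibility cutoff are simplifying assumptions, not DoLa's exact configuration:

```python
import numpy as np

# Simplified DoLa-style sketch: contrast the vocabulary distributions produced
# by a later layer and an earlier layer of the same model.

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dola_scores(late_layer_logits, early_layer_logits, plausibility=0.1):
    p_late = softmax(late_layer_logits)
    p_early = softmax(early_layer_logits)
    # Only keep tokens the mature layer already finds plausible (assumed cutoff).
    mask = p_late >= plausibility * p_late.max()
    scores = np.where(mask, np.log(p_late) - np.log(p_early), -np.inf)
    return scores  # higher = knowledge that "emerges" in the later layers

late = np.array([2.0, 4.0, 1.0])   # later layer favours token 1
early = np.array([2.0, 1.5, 1.0])  # earlier layer is less sure about token 1
print(dola_scores(late, early))    # token 1 gets the highest contrastive score
```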

3. Supervised Fine-Tuning (SFT) - A technique for adapting a pre-trained LLM to a target task using labeled data, by fine-tuning the LLM's parameters on that task. When only a subset of the parameters is updated, SFT requires less computational power and training time than full fine-tuning.
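
Below is a toy PyTorch sketch of the parameter-efficient flavour described above: the pre-trained backbone is frozen and only a small task head is trained on labeled pairs. The tiny linear backbone and the random data are placeholders standing in for a real LLM and a real labeled dataset:

```python
import torch
import torch.nn as nn

# Toy sketch of parameter-efficient SFT: freeze the pre-trained "backbone" and
# train only a small adapter head on labeled data.

torch.manual_seed(0)
backbone = nn.Linear(16, 16)            # stands in for the frozen pre-trained model
adapter = nn.Linear(16, 4)              # small task head: the only trained subset
for p in backbone.parameters():
    p.requires_grad = False             # pre-trained weights stay fixed

optimizer = torch.optim.AdamW(adapter.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(32, 16)            # placeholder "labeled" examples
labels = torch.randint(0, 4, (32,))

for step in range(100):
    logits = adapter(backbone(inputs))  # frozen features -> trainable head
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final loss: {loss.item():.3f}")
```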

The journey to harnessing the full potential of LLMs requires tackling the persistent issue of hallucinations. While traditional fine-tuning has its limitations, exciting new techniques like RAG, Self-Refinement, and Context-Aware Decoding offer promising solutions. As we delve deeper into these methods, questions arise:

Which techniques hold the most potential for specific research domains or tasks? Can we combine these methods for even more robust hallucination mitigation?

These are just a few sparks to ignite the discussion. Share your thoughts, experiences, and questions in the comments below! Let's work together to build a future where LLMs are not just powerful, but also reliable and trustworthy partners in our endeavors.