---
license: gpl-3.0
---

# TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space

Shaolei Zhang, Tian Yu, Yang Feng*

TruthX is an inference-time method that elicits truthfulness from LLMs by editing their internal representations in a truthful latent space, thereby mitigating hallucinations. On the TruthfulQA benchmark, TruthX yields an average improvement of 20% in truthfulness across various LLMs.
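
To make the idea concrete, below is a minimal sketch of inference-time representation editing, assuming PyTorch and a decoder-style transformer. It is not the authors' implementation: TruthX learns an autoencoder that maps representations into truthful and semantic latent spaces and edits within the truthful one, whereas this sketch simplifies the edit to adding a single learned direction via a forward hook. The names `edit_direction`, `strength`, and the hooked layer index are all illustrative.

```python
import torch

def make_editing_hook(edit_direction: torch.Tensor, strength: float = 1.0):
    """Return a forward hook that shifts a layer's hidden states along a
    learned direction. In TruthX the edit happens in a learned truthful
    latent space; here it is simplified to a direct residual-stream shift."""
    def hook(module, inputs, output):
        # Decoder layers typically return a tuple whose first element
        # is the hidden states tensor.
        hidden = output[0] if isinstance(output, tuple) else output
        edited = hidden + strength * edit_direction.to(hidden.device, hidden.dtype)
        if isinstance(output, tuple):
            return (edited,) + output[1:]
        return edited
    return hook

# Usage (hypothetical): attach the hook to one decoder layer of a loaded LLM.
# handle = model.model.layers[20].register_forward_hook(
#     make_editing_hook(edit_direction, strength=1.0))
```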

This repository provides TruthX models trained on a variety of LLMs (see the loading sketch after the list):

- Llama-1-7B, Alpaca-7B
- Llama-2-7B, Llama-2-7B-Chat, Vicuna-7B-v1.5
- Mistral-7B-v0.1, Mistral-7B-Instruct-v0.1, Mistral-7B-Instruct-v0.2
- Baichuan2-7B-Base, Baichuan2-7B-Chat
- ChatGLM3-6B-Base, ChatGLM3-6B

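As a quick start, the sketch below shows one plausible way to load and query such a checkpoint with Hugging Face `transformers`. The repo id, the need for `trust_remote_code=True`, and the prompt are assumptions for illustration, not confirmed details of this repository.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ICTNLP/Llama-2-7b-chat-TruthX"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",
    trust_remote_code=True,  # assumes any custom TruthX editing code ships with the repo
)

prompt = "What happens if you crack your knuckles a lot?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
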
## Results on the TruthfulQA benchmark

- MC1 accuracy on the TruthfulQA benchmark. For more results, please refer to the paper.
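
For reference, MC1 scores a question as correct only when the model assigns the highest likelihood to the single best reference answer among all choices. The sketch below shows that scoring rule on precomputed per-choice log-probabilities; the data layout (`choice_logprobs`, `correct_idx`) is a hypothetical convenience, not TruthfulQA's actual file format.

```python
def mc1_accuracy(questions):
    """questions: list of dicts with 'choice_logprobs' (list of floats, one
    per answer choice) and 'correct_idx' (index of the best reference answer)."""
    correct = 0
    for q in questions:
        # The prediction is the answer choice with the highest log-probability.
        pred = max(range(len(q["choice_logprobs"])),
                   key=lambda i: q["choice_logprobs"][i])
        correct += int(pred == q["correct_idx"])
    return correct / len(questions)

# Example: the correct answer (index 0) scores highest, so MC1 is 1.0 here.
print(mc1_accuracy([{"choice_logprobs": [-1.2, -3.4, -2.8], "correct_idx": 0}]))
```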

*(Figure: MC1 accuracy results on the TruthfulQA benchmark.)*

Please refer to the GitHub repo for specific usage scripts.