Vily1998 committed on
Commit 980ed4e
1 Parent(s): a7c0616
Files changed (3)
  1. ._README.md +0 -0
  2. README.md +33 -7
  3. truthx_results.png +0 -0
._README.md ADDED
Binary file (4.1 kB).
 
README.md CHANGED
@@ -4,20 +4,46 @@ license: gpl-3.0
  
  # TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space
  
- > [Shaolei Zhang](https://zhangshaolei1998.github.io/), Tian Yu, [Yang Feng](https://people.ucas.edu.cn/~yangfeng?language=en)*
+ > [Shaolei Zhang](https://zhangshaolei1998.github.io/), [Tian Yu](https://tianyu0313.github.io/), [Yang Feng](https://people.ucas.edu.cn/~yangfeng?language=en)*
  
- TruthX is an inference-time method to elicit the truthfulness of LLMs by editing their internal representations in truthful space, thereby mitigating the hallucinations of LLMs. On the TruthfulQA benchmark, TruthX yields an average enhancement of 20% in truthfulness across various LLMs.
+ TruthX models for paper "[TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space](https://arxiv.org/pdf/2402.17811.pdf)".
  
- This repository provides TruthX models trained on a variety of LLMs:
+ **TruthX** is an inference-time method to elicit the truthfulness of LLMs by editing their internal representations in truthful space, thereby mitigating the hallucinations of LLMs. On the [TruthfulQA benchmark](https://paperswithcode.com/sota/question-answering-on-truthfulqa), TruthX yields an average **enhancement of 20% in truthfulness** across 13 advanced LLMs.
+ 
+ <div align="center">
+ <img src="./truthx_results.png" alt="img" width="100%" />
+ </div>
+ <p align="center">
+ TruthfulQA MC1 accuracy of TruthX across 13 advanced LLMs
+ </p>
+ 
+ 
+ This repo provides TruthX models trained on a variety of LLMs:
  - Llama-1-7B, Alpaca-7B
  - Llama-2-7B, Llama-2-7B-Chat, Vicuna-7B-v1.5
  - Mistral-7B-v0.1, Mistral-7B-Instruct-v0.1, Mistral-7B-Instruct-v0.2
  - Baichuan2-7B-Base, Baichuan2-7B-Chat
  - Chatglm3-6B-Base, Chatglm3-6B
  
- ## Results on TruthfulQA benchmark
- - MC1 accuracy on TruthfulQA benchmark. More results refer to the paper.
- 
- ![truthfulqa_result](assert/truthfulqa_result.png)
- 
- Please refer to [GitHub repo](https://github.com/ictnlp/TruthX) for specific usage scripts.
+ Please refer to [GitHub repo](https://github.com/ictnlp/TruthX) and [our paper](https://arxiv.org/pdf/2402.17811.pdf) for more details.
+ 
+ ## Licence
+ Model weights and the inference code are released under The GNU General Public License v3.0 (GPLv3)
+ 
+ ## Citation
+ 
+ If this repository is useful for you, please cite as:
+ 
+ ```
+ @misc{zhang2024truthx,
+       title={TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space},
+       author={Shaolei Zhang and Tian Yu and Yang Feng},
+       year={2024},
+       eprint={2402.17811},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL},
+       url={https://arxiv.org/abs/2402.17811}
+ }
+ ```
+ 
+ If you have any questions, feel free to contact `zhangshaolei20z@ict.ac.cn`.
truthx_results.png ADDED