Vily1998 committed
Commit 2d186e9
1 Parent(s): a2c1ca1
Files changed (2):
  1. ._README.md +0 -0
  2. README.md +27 -2
._README.md CHANGED
Binary files a/._README.md and b/._README.md differ
 
README.md CHANGED
@@ -3,7 +3,10 @@ license: gpl-3.0
 ---
 # TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space
 
-> [Shaolei Zhang](https://zhangshaolei1998.github.io/), Tian Yu, [Yang Feng](https://people.ucas.edu.cn/~yangfeng?language=en)*
+> [Shaolei Zhang](https://zhangshaolei1998.github.io/), [Tian Yu](https://tianyu0313.github.io/), [Yang Feng](https://people.ucas.edu.cn/~yangfeng?language=en)*
+
+Model for paper "[TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space](https://arxiv.org/pdf/2402.17811.pdf)".
+
 
 **TruthX** is an inference-time method to elicit the truthfulness of LLMs by editing their internal representations in truthful space, thereby mitigating the hallucinations of LLMs. On the [TruthfulQA benchmark](https://paperswithcode.com/sota/question-answering-on-truthfulqa), TruthX yields an average **enhancement of 20% in truthfulness** across 13 advanced LLMs.
 
@@ -34,4 +37,26 @@ outputs_text = tokenizer.decode(outputs, skip_special_tokens=True).strip()
 print(outputs_text)
 ```
 
-Please refer to [GitHub repo](https://github.com/ictnlp/TruthX) and our paper for more details.
+
+Please refer to [GitHub repo](https://github.com/ictnlp/TruthX) and [our paper](https://arxiv.org/pdf/2402.17811.pdf) for more details.
+
+## Licence
+Model weights and the inference code are released under The GNU General Public License v3.0 (GPLv3)
+
+## Citation
+
+If this repository is useful for you, please cite as:
+
+```
+@misc{zhang2024truthx,
+      title={TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space},
+      author={Shaolei Zhang and Tian Yu and Yang Feng},
+      year={2024},
+      eprint={2402.17811},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL},
+      url={https://arxiv.org/abs/2402.17811}
+}
+```
+
+If you have any questions, feel free to contact `zhangshaolei20z@ict.ac.cn`.
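
Only the tail of the README's usage example is visible in the second hunk (the `tokenizer.decode(...)` and `print(outputs_text)` lines). For orientation, the sketch below shows the standard `transformers` loading-and-generation pattern that tail implies; it is an assumption-laden illustration, not the README's actual snippet. The repository ID, prompt, and `max_new_tokens` value are placeholders, and `trust_remote_code=True` is assumed; see the GitHub repo for the authoritative example.

```python
# A minimal sketch, assuming the TruthX-edited model is published as a
# standard Hugging Face checkpoint with custom modeling code. The repository
# ID, prompt, and generation settings are placeholders, not taken from the
# diff above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "ICTNLP/Llama-2-7b-chat-TruthX"  # assumed repository ID

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)

question = "What are the benefits of eating an apple a day?"
input_ids = tokenizer(question, return_tensors="pt").input_ids

# TruthX edits internal representations during the forward pass, so
# generation is invoked as usual.
outputs = model.generate(input_ids, max_new_tokens=256)[0]

# These two lines correspond to the tail visible in the second hunk.
outputs_text = tokenizer.decode(outputs, skip_special_tokens=True).strip()
print(outputs_text)
```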