---
license: cc-by-nc-sa-4.0
---

<div align="center">

**Editing Conceptual Knowledge for Large Language Models**

---

<p align="center">
  <a href="#-conceptual-knowledge-editing">Overview</a> •
  <a href="#-usage">How To Use</a> •
  <a href="#-citation">Citation</a> •
  <a href="https://arxiv.org/abs/2403.06259">Paper</a> •
  <a href="https://zjunlp.github.io/project/ConceptEdit">Website</a>
</p>
</div>

## 💡 Conceptual Knowledge Editing

<div align="center">
<img src="./flow1.gif" width="70%" height="70%" />
</div>

### Task Definition

A **concept** is a generalization formed in the process of cognition; it represents the shared features and essential characteristics of a class of entities.
Concept editing therefore aims to modify the definition of a concept, thereby altering the behavior of LLMs when they process that concept.

### Evaluation

To analyze conceptual knowledge modification, we adopt the metrics used for factual editing, with the edit target being the concept $C$ rather than a factual instance $t$:

- `Reliability`: the success rate of the edit on the given editing description
- `Generalization`: the success rate of the edit on inputs **within** the editing scope
- `Locality`: whether the model's output on unrelated inputs remains unchanged after editing

We also use two concept-specific evaluation metrics; a computation sketch for the metrics follows the list.

- `Instance Change`: capturing the intricacies of the instance-level changes induced by the edit
- `Concept Consistency`: the semantic similarity between the generated concept definition and the target definition
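
Below is a minimal sketch of how these metrics might be computed from model outputs. The record fields, the `query_model` helper, and the `sentence-transformers` encoder are illustrative assumptions, not part of the released evaluation code (the official evaluation is in the EasyEdit ConceptEdit example); `Instance Change` is omitted here for brevity.

```python
# Illustrative sketch only: record fields and query_model are assumed placeholders.
from sentence_transformers import SentenceTransformer, util  # assumed dependency


def exact_match_rate(pairs, query_model):
    """Fraction of (prompt, expected) pairs the model answers as expected."""
    hits = [query_model(p).strip() == expected.strip() for p, expected in pairs]
    return sum(hits) / len(hits)


def evaluate_edit(record, query_model_post, query_model_pre, encoder):
    # Reliability: success on the editing description itself.
    reliability = exact_match_rate(record["edit_pairs"], query_model_post)
    # Generalization: success on prompts within the editing scope.
    generalization = exact_match_rate(record["in_scope_pairs"], query_model_post)
    # Locality: outputs on unrelated inputs should be unchanged after editing.
    locality = sum(
        query_model_post(p).strip() == query_model_pre(p).strip()
        for p in record["unrelated_prompts"]
    ) / len(record["unrelated_prompts"])
    # Concept Consistency: semantic similarity between the generated definition
    # and the target definition, approximated here with sentence embeddings.
    gen_def = query_model_post(f"Define the concept: {record['concept']}")
    emb = encoder.encode([gen_def, record["target_definition"]], convert_to_tensor=True)
    concept_consistency = util.cos_sim(emb[0], emb[1]).item()
    return {
        "reliability": reliability,
        "generalization": generalization,
        "locality": locality,
        "concept_consistency": concept_consistency,
    }


# Example encoder choice (an assumption):
# encoder = SentenceTransformer("all-MiniLM-L6-v2")
```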

## 🌟 Usage

### 🎍 Current Implementation

As shown in the main table of our paper, four editing methods are supported for conceptual knowledge editing.

| **Method** | GPT-2 | GPT-J | LLaMA2-13B-Chat | Mistral-7B-v0.1 |
| :--------------: | :--------------: | :--------------: | :--------------: | :--------------: |
| FT | ✅ | ✅ | ✅ | ✅ |
| ROME | ✅ | ✅ | ✅ | ✅ |
| MEMIT | ✅ | ✅ | ✅ | ✅ |
| PROMPT | ✅ | ✅ | ✅ | ✅ |

### 💻 Run

You can follow [EasyEdit](https://github.com/zjunlp/EasyEdit/edit/main/examples/ConceptEdit.md) to run the experiments.
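
For illustration, a minimal sketch of an EasyEdit-style run is shown below. The hyperparameter file path, the model, and the example prompt/target are assumptions, and the ConceptEdit example in EasyEdit may use a dedicated editor and dataset loader rather than this generic `BaseEditor` flow.

```python
# Hedged sketch of an EasyEdit-style edit; paths, model, and data are placeholders.
from easyeditor import BaseEditor, ROMEHyperParams

# Hyperparameter YAMLs ship with EasyEdit; this exact path is an assumption.
hparams = ROMEHyperParams.from_hparams("./hparams/ROME/gpt2-xl.yaml")
editor = BaseEditor.from_hparams(hparams)

# A toy concept-editing request: rewrite the definition tied to a concept.
metrics, edited_model, _ = editor.edit(
    prompts=["A mammal is"],                                  # editing description (illustrative)
    ground_truth=["a warm-blooded vertebrate animal"],        # original definition (illustrative)
    target_new=["a cold-blooded vertebrate animal"],          # counterfactual definition (illustrative)
    subject=["mammal"],                                       # the concept being edited
)
print(metrics)
```

The linked ConceptEdit example documents the exact dataset format and editor configuration used in the paper.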

## 📖 Citation

Please cite our paper if you use **ConceptEdit** in your work.

```bibtex
@misc{wang2024editing,
      title={Editing Conceptual Knowledge for Large Language Models},
      author={Xiaohan Wang and Shengyu Mao and Ningyu Zhang and Shumin Deng and Yunzhi Yao and Yue Shen and Lei Liang and Jinjie Gu and Huajun Chen},
      year={2024},
      eprint={2403.06259},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## 🎉 Acknowledgement

We would like to express our sincere gratitude to [DBpedia](https://www.dbpedia.org/resources/ontology/), [Wikidata](https://www.wikidata.org/wiki/Wikidata:Introduction), [OntoProbe-PLMs](https://github.com/vickywu1022/OntoProbe-PLMs), and [ROME](https://github.com/kmeng01/rome).

Their contributions are invaluable to the advancement of our work.