pratyushmaini committed
Commit
9f50e53
1 Parent(s): 8489298

Upload README.md with huggingface_hub

Files changed (1):
  README.md (+75 lines)
---
language: en
tags:
- phi-1.5
- unlearning
- TOFU
license: mit
---
# Phi-1.5 TOFU Unlearning Model

This model is a variant of the Phi-1.5 model, fine-tuned on the TOFU (Task of Fictitious Unlearning) dataset and then subjected to various unlearning algorithms.
## Model Details

- **Base Model**: Phi-1.5
- **Training**: Fine-tuned on the TOFU dataset
- **Unlearning**: Various unlearning algorithms applied
## Unlearning Algorithm

This model uses the `KL` unlearning algorithm with the following parameters:
- Learning Rate: `1e-05`
- Forget Split: `forget05` (5% of the fictitious-author data marked for forgetting)
## Revisions

The model is organized into multiple revisions, each representing a checkpoint during the unlearning process. Revision names follow the pattern `checkpoint-X`, where X is the checkpoint number.
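To see which checkpoint revisions exist, you can enumerate the repository's branches. A minimal sketch using `huggingface_hub.list_repo_refs`; `locuslab/{model_name}` mirrors the unfilled placeholder used in the loading example below:

```python
from huggingface_hub import list_repo_refs

# Replace {model_name} with this repository's name (placeholder, as below)
refs = list_repo_refs("locuslab/{model_name}")

# Checkpoint revisions are stored as branches named 'checkpoint-X'
for branch in refs.branches:
    print(branch.name)
```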
## Loading the Model

To load a specific revision of this model, use the following code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Replace {model_name} with this repository's name, and 'checkpoint-X'
# with the desired revision (e.g., 'checkpoint-12')
revision = "checkpoint-X"

model = AutoModelForCausalLM.from_pretrained("locuslab/{model_name}", revision=revision)
tokenizer = AutoTokenizer.from_pretrained("locuslab/{model_name}", revision=revision)
```
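Once a revision is loaded, generation works as with any causal language model. A minimal sketch; the prompt is an arbitrary illustration, not drawn from the dataset:

```python
import torch

prompt = "Question: What is the profession of the fictitious author?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=50,
        do_sample=False,
        pad_token_id=tokenizer.eos_token_id,  # avoids a warning when no pad token is set
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```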
## TOFU Dataset

TOFU (Task of Fictitious Unlearning) is a dataset designed for training and evaluating unlearning algorithms in language models. It simulates scenarios where certain information must be "forgotten", i.e., removed from the model's knowledge.
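To inspect the data, you can load it with the `datasets` library. A minimal sketch, assuming the dataset is hosted as `locuslab/TOFU` with a `forget05` configuration matching this model's forget split:

```python
from datasets import load_dataset

# Assumed repo id and config name; adjust if the dataset is hosted elsewhere
forget_set = load_dataset("locuslab/TOFU", "forget05", split="train")

# Each record is a question-answer pair about a fictitious author
print(forget_set[0])
```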
## Unlearning Process

1. The base Phi-1.5 model was first fine-tuned on the TOFU dataset (checkpoint-625).
2. Various unlearning algorithms were then applied to this fine-tuned model to selectively "forget" certain information (a sketch of a KL-style objective follows this list).
3. The results of these unlearning runs are captured in the different revisions of this model.
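The `KL` algorithm is not spelled out in this card. The sketch below shows one common formulation of a KL-style unlearning objective: gradient ascent on the forget set plus a KL penalty that keeps the model close to the original (oracle) model on the retain set. It is an illustration under those assumptions, not the exact TOFU implementation:

```python
import torch
import torch.nn.functional as F

def kl_unlearning_loss(model, oracle_model, forget_batch, retain_batch):
    """One step of a KL-style unlearning objective (illustrative sketch).

    Both batches are dicts with input_ids, attention_mask, and labels,
    as produced by a standard causal-LM data collator.
    """
    # Ascent term: negate the usual LM loss so the forget examples
    # become *less* likely under the model
    forget_loss = -model(**forget_batch).loss

    # KL term: keep the model's token distributions on retain data
    # close to those of the frozen original (oracle) model
    with torch.no_grad():
        oracle_logits = oracle_model(**retain_batch).logits
    current_logits = model(**retain_batch).logits

    kl = F.kl_div(
        F.log_softmax(current_logits, dim=-1),
        F.softmax(oracle_logits, dim=-1),
        reduction="batchmean",
    )
    return forget_loss + kl
```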
## Usage and Limitations

This model is primarily intended for research on machine unlearning and privacy in language models. It may not be suitable for general-purpose language tasks without further evaluation.
## Citation

If you use this model in your research, please cite:

```bibtex
@misc{tofu2024,
  title={TOFU: A Task of Fictitious Unlearning for LLMs},
  author={Pratyush Maini and Zhili Feng and Avi Schwarzschild and Zachary C. Lipton and J. Zico Kolter},
  year={2024},
  eprint={2401.06121},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
## Contact

For questions or issues regarding this model, please contact pratyushmaini@cmu.edu.