RichardErkhov committed on
Commit a9cbaf7
1 Parent(s): 0b29bbe

uploaded readme

Files changed (1): README.md (+107 lines, new file)
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

tofu_ft_llama2-7b - bnb 8bits
- Model creator: https://huggingface.co/locuslab/
- Original model: https://huggingface.co/locuslab/tofu_ft_llama2-7b/
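
The weights in this repository are an 8-bit bitsandbytes quantization of the original checkpoint. As a rough sketch of how such a quantization can be loaded (an assumption about your setup, not an official snippet: it uses the original model id, and 8-bit loading additionally requires the `bitsandbytes` and `accelerate` packages; substitute this repository's id to use the pre-quantized weights directly):

```python
# Sketch: load the TOFU model in 8-bit with bitsandbytes (assumed setup).
# "locuslab/tofu_ft_llama2-7b" is the original full-precision checkpoint;
# swap in this repository's id to reuse the already-quantized weights.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "locuslab/tofu_ft_llama2-7b"
bnb_config = BitsAndBytesConfig(load_in_8bit=True)  # quantize linear layers to 8-bit at load time

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place the quantized layers on the available GPU(s)
)
```
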
Original model description:
---
license: llama2
---

# Llama2-7B-Chat Fine-Tuned on TOFU Dataset

Welcome to the repository for the Llama2-7B-Chat model, fine-tuned on the TOFU (Task of Fictitious Unlearning) dataset. This model supports research focused on unlearning specific data points from a model's training data, thereby addressing concerns related to privacy, data sensitivity, and regulatory compliance.

## Quick Links

- [**Website**](https://locuslab.github.io/tofu): The landing page for TOFU
- [**arXiv Paper**](http://arxiv.org/abs/2401.06121): Detailed information about the TOFU dataset and its significance in unlearning tasks.
- [**GitHub Repository**](https://github.com/locuslab/tofu): Access the source code, fine-tuning scripts, and additional resources for the TOFU dataset.
- [**Dataset on Hugging Face**](https://huggingface.co/datasets/locuslab/TOFU): Direct link to download the TOFU dataset.
- [**Leaderboard on Hugging Face Spaces**](https://huggingface.co/spaces/locuslab/tofu_leaderboard): Current rankings and submissions for the TOFU dataset challenges.
- [**Summary on Twitter**](https://x.com/_akhaliq/status/1745643293839327268): A concise summary and key takeaways from the project.

## Overview

The [TOFU dataset](https://huggingface.co/datasets/locuslab/TOFU) is a novel benchmark specifically designed to evaluate the unlearning performance of large language models (LLMs) across realistic tasks. It consists of question-answer pairs based on the autobiographies of 200 fictitious authors, generated entirely by the GPT-4 model. This dataset presents a unique opportunity for models like Llama2-7B-Chat to demonstrate their capacity for selective data unlearning.
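
To inspect the question-answer pairs directly, the dataset can be loaded with the `datasets` library. A minimal sketch, assuming the configuration names listed on the TOFU dataset card (`full` for all authors, plus forget/retain subsets such as `forget10`):

```python
# Sketch: browse the TOFU question-answer pairs.
# The configuration names ("full", "forget10", ...) are assumptions; check the
# dataset card at https://huggingface.co/datasets/locuslab/TOFU for the exact list.
from datasets import load_dataset

tofu = load_dataset("locuslab/TOFU", "full")        # QA pairs for all 200 fictitious authors
forget = load_dataset("locuslab/TOFU", "forget10")  # assumed name of a 10% forget split

sample = tofu["train"][0]
print(sample["question"])
print(sample["answer"])
```
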
## Model Description

Llama2-7B-Chat has been fine-tuned on the full TOFU dataset to specialize in unlearning diverse fractions of the forget set. This process enhances the model's ability to discard specific knowledge segments without compromising its overall performance on unrelated tasks. This version of Llama2-7B-Chat is specifically tailored for research in data privacy and machine unlearning.

### Applicability

The fine-tuned model is compatible with a broad range of research applications, including but not limited to:

- Privacy-preserving machine learning
- Regulatory compliance in AI
- Exploring the dynamics of knowledge retention and forgetting in AI systems

### Technical Specifications

- **Base Model:** Llama2-7B-Chat
- **Dataset:** TOFU (full)
- **Fine-tuning Methodology:** Task-specific fine-tuning on question-answer pairs for unlearning performance
- **Compatible Frameworks:** The model is readily usable with frameworks supporting Llama2 models.

## Getting Started

To use the fine-tuned Llama2-7B-Chat model, follow these steps:

### Installation

Ensure you have Python 3.10+ installed. Then, install the required packages:

```bash
pip install transformers
pip install datasets
```

### Loading the Model

You can load the model using the Transformers library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Download the tokenizer and fine-tuned weights from the Hugging Face Hub
model_name = "locuslab/tofu_ft_llama2-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
82
+
83
+ Usage Example:
84
+
85
+ ```bash
86
+ inputs = tokenizer.encode("Your prompt here", return_tensors='pt')
87
+ outputs = model.generate(inputs)
88
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
89
+ ```
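
Because the base model is Llama2-7B-Chat, wrapping the question in the Llama-2 instruction format tends to produce cleaner answers; whether the TOFU fine-tuning used exactly this template is an assumption here, as is the `forget10` configuration name. A sketch, continuing from the loading snippet above, that queries the model on a forget-set question:

```python
# Sketch: ask the model a question from an assumed forget split and compare the
# generated answer to the reference. The [INST] ... [/INST] wrapper is the standard
# Llama-2 chat format and is assumed to match the fine-tuning template.
from datasets import load_dataset

forget = load_dataset("locuslab/TOFU", "forget10")["train"]
question, reference = forget[0]["question"], forget[0]["answer"]

prompt = f"[INST] {question} [/INST]"
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64)  # cap the length of the generated answer

print("Model answer:    ", tokenizer.decode(outputs[0], skip_special_tokens=True))
print("Reference answer:", reference)
```
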

## Codebase

The code for training the models, along with all fine-tuned model checkpoints, is available at our [GitHub repository](https://github.com/locuslab/tofu).

## Citing Our Work

If you find our codebase and dataset beneficial, please cite our work:

```
@misc{tofu2024,
      title={TOFU: A Task of Fictitious Unlearning for LLMs},
      author={Pratyush Maini and Zhili Feng and Avi Schwarzschild and Zachary C. Lipton and J. Zico Kolter},
      year={2024},
      eprint={2401.06121},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```