
Phi-1.5 Fine-Tuned on TOFU Dataset

Welcome to the repository for the Phi-1.5 model, fine-tuned on the TOFU (Task of Fictitious Unlearning) dataset. This model enables researchers to study the ability to unlearn specific data points from a model's training data, thereby addressing concerns related to privacy, data sensitivity, and regulatory compliance.

Overview

The TOFU dataset is a novel benchmark specifically designed to evaluate the unlearning performance of large language models (LLMs) on realistic tasks. It consists of question-answer pairs based on the autobiographies of 200 fictitious authors, generated entirely by GPT-4. This dataset presents a unique opportunity for chat models such as Llama2-7B-Chat and Phi-1.5 to demonstrate their capacity for selective data unlearning.

Model Description

Phi-1.5 has been fine-tuned on the full TOFU dataset and serves as the base model for experiments that unlearn varying fractions of the forget set. The aim is to discard specific knowledge segments without compromising the model's overall performance on unrelated tasks. This version of Phi-1.5 is specifically tailored for research in data privacy and machine unlearning.
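For experiments, the TOFU dataset itself is available on the Hugging Face Hub via the datasets library (installed in the Getting Started section below). The sketch here assumes the config names used by the TOFU release, such as "full", "forget10" (a 10% forget split), and its complement "retain90"; check the dataset page for the exact names.

from datasets import load_dataset

# Full fine-tuning data: question-answer pairs about 200 fictitious authors.
full_set = load_dataset("locuslab/TOFU", "full", split="train")

# A forget split and its complementary retain split ("forget10" = 10% of authors).
forget_set = load_dataset("locuslab/TOFU", "forget10", split="train")
retain_set = load_dataset("locuslab/TOFU", "retain90", split="train")

print(full_set[0])  # a {'question': ..., 'answer': ...} pair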

Applicability

The fine-tuned model is compatible with a broad range of research applications, including but not limited to:

  • Privacy-preserving machine learning
  • Regulatory compliance in AI
  • Exploring the dynamics of knowledge retention and forgetting in AI systems

Technical Specifications

  • Base Model: Phi-1.5 (from Microsoft)
  • Dataset: TOFU (full)
  • Model Size: 1.42B parameters (safetensors, BF16)
  • Fine-tuning Methodology: Task-specific fine-tuning on question-answer pairs for unlearning performance
  • Compatible Frameworks: The model is readily usable with frameworks supporting Phi models.

Getting Started

To use the fine-tuned Phi-1.5 model, follow these steps:

Installation

Ensure you have Python 3.10+ installed. Then, install the required packages:

pip install transformers
pip install datasets

Loading the Model

You can load the model using the Transformers library:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Download the fine-tuned checkpoint and its tokenizer from the Hugging Face Hub.
model_name = "locuslab/tofu_ft_phi-1.5"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
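
The checkpoint is stored in bfloat16, so you can optionally load it directly in that precision to reduce memory use (requires PyTorch with bfloat16 support):

import torch

# Optional: load the weights in bfloat16, matching the stored tensor type.
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)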

Usage Example

# Encode a prompt, generate a continuation, and decode it back to text.
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
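
Because the model was fine-tuned on question-answer pairs, prompts in a QA format tend to elicit the most faithful answers. The "Question: ... Answer:" template and the author name below are illustrative, drawn from the TOFU paper's examples:

# Ask about one of the fictitious authors using a QA-style prompt.
prompt = "Question: What is the profession of Hsiao Yun-Hwa's father?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))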

Codebase

The training code and links to all fine-tuned models are available in our GitHub repository.
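
The repository implements multiple unlearning baselines. To give a flavor of what such training involves, below is a minimal gradient-ascent sketch, reusing the model, tokenizer, and forget_set objects from the snippets above: the model takes ordinary optimization steps on the forget set with the loss sign flipped, pushing probability mass away from the answers to be forgotten. The hyperparameters and data handling here are illustrative assumptions, not the repository's exact recipe.

import torch

# Illustrative gradient-ascent unlearning loop (not the repository's exact recipe).
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()

for example in forget_set.select(range(8)):  # a few forget examples for demonstration
    text = f"Question: {example['question']}\nAnswer: {example['answer']}"
    batch = tokenizer(text, return_tensors="pt", truncation=True)
    loss = model(**batch, labels=batch["input_ids"]).loss
    (-loss).backward()  # ascend, rather than descend, the loss on forget data
    optimizer.step()
    optimizer.zero_grad()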

Citing Our Work

If you find our codebase and dataset beneficial, please cite our work:

@misc{tofu2024,
      title={TOFU: A Task of Fictitious Unlearning for LLMs}, 
      author={Pratyush Maini and Zhili Feng and Avi Schwarzschild and Zachary C. Lipton and J. Zico Kolter},
      year={2024},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}