
Quantization made by Richard Erkhov.

  • Github
  • Discord
  • Request more models

tofu_ft_llama2-7b - bnb 8bits

Original model description:

license: llama2

Llama2-7B-Chat Fine-Tuned on TOFU Dataset

Welcome to the repository for the Llama2-7B-Chat model, fine-tuned on the TOFU (Task of Fictitious Unlearning) dataset. This model supports research on unlearning specific data points from a model's training data, addressing concerns related to privacy, data sensitivity, and regulatory compliance.

Overview

The TOFU dataset is a novel benchmark specifically designed to evaluate the unlearning performance of large language models (LLMs) across realistic tasks. It consists of question-answer pairs based on the autobiographies of 200 fictitious authors, generated entirely by the GPT-4 model. This dataset presents a unique opportunity for models like Llama2-7B-Chat to demonstrate their capacity for selective data unlearning.
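For reference, the dataset itself can be inspected with the datasets library. A minimal sketch, assuming the dataset is published as locuslab/TOFU with config names such as full and forget10:

from datasets import load_dataset

# Full question-answer set over the 200 fictitious authors.
full = load_dataset("locuslab/TOFU", "full")

# Forget/retain splits, e.g. the 10% forget set (config name assumed).
forget10 = load_dataset("locuslab/TOFU", "forget10")

print(full["train"][0])  # a single question-answer pair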

Model Description

Llama2-7B-Chat has been fine-tuned on the full TOFU dataset to specialize in unlearning diverse fractions of the forget set. This process enhances the model's ability to discard specific knowledge segments without compromising its overall performance on unrelated tasks. This version of Llama2-7B-Chat is specifically tailored for research in data privacy and machine unlearning.

Applicability

The fine-tuned model is compatible with a broad range of research applications, including but not limited to:

  • Privacy-preserving machine learning
  • Regulatory compliance in AI
  • Exploring the dynamics of knowledge retention and forgetting in AI systems

Technical Specifications

  • Base Model: Llama2-7B-Chat

  • Dataset: TOFU (full)

  • Fine-tuning Methodology: Task-specific fine-tuning on question-answer pairs for unlearning performance

  • Compatible Frameworks: The model is readily usable with frameworks supporting Llama2 models.

Getting Started

To use the fine-tuned Llama2-7B-Chat model, follow these steps:

Installation

Ensure you have Python 3.10+ installed. Then, install the required packages:

pip install transformers
pip install datasets

Loading the Model

You can load the model using the Transformers library:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Fine-tuned checkpoint released by the TOFU authors.
model_name = "locuslab/tofu_ft_llama2-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
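
This repository hosts an 8-bit bitsandbytes quantization of that checkpoint. As a minimal sketch, assuming a CUDA GPU and pip install bitsandbytes accelerate, the same weights can be loaded in 8-bit as follows (loading the pre-quantized weights from this repository works the same way):

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "locuslab/tofu_ft_llama2-7b"

# Quantize weights to 8-bit on load via bitsandbytes.
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)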

Usage Example

# Tokenize a prompt and generate a continuation.
inputs = tokenizer.encode("Your prompt here", return_tensors='pt')
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
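
Because the base model is Llama2-7B-Chat, prompts generally follow the Llama2 chat format; the exact template used during TOFU fine-tuning is defined in the authors' codebase. A minimal sketch, assuming [INST] ... [/INST] wrapping and an illustrative question about a fictitious author:

# Wrap the question in Llama2-chat instruction tags (assumed template;
# check the TOFU codebase for the exact format used during fine-tuning).
question = "What is the full name of the author born in Taipei?"  # illustrative only
prompt = f"[INST] {question} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
answer = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer)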

Codebase

The code for training the models, along with all fine-tuned checkpoints, is available in our GitHub repository.

Citing Our Work

If you find our codebase and dataset beneficial, please cite our work:

@misc{tofu2024,
      title={TOFU: A Task of Fictitious Unlearning for LLMs},
      author={Pratyush Maini and Zhili Feng and Avi Schwarzschild and Zachary C. Lipton and J. Zico Kolter},
      year={2024},
      eprint={2401.06121},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}