PEFT documentation


You are viewing the v0.6.1 documentation. A newer version, v0.11.0, is available.

PEFT

🤗 PEFT (Parameter-Efficient Fine-Tuning) is a library for efficiently adapting pre-trained language models (PLMs) to various downstream applications without fine-tuning all of the model's parameters. Because fine-tuning large-scale PLMs is prohibitively costly, PEFT methods fine-tune only a small number of (extra) model parameters, significantly decreasing computational and storage costs. Recent state-of-the-art PEFT techniques achieve performance comparable to that of full fine-tuning.

PEFT is seamlessly integrated with 🤗 Accelerate for large-scale models, leveraging DeepSpeed and Big Model Inference.

Supported methods

  1. LoRA: LoRA: Low-Rank Adaptation of Large Language Models
  2. Prefix Tuning: Prefix-Tuning: Optimizing Continuous Prompts for Generation, P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
  3. P-Tuning: GPT Understands, Too
  4. Prompt Tuning: The Power of Scale for Parameter-Efficient Prompt Tuning
  5. AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning
  6. LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
  7. IA3: Infused Adapter by Inhibiting and Amplifying Inner Activations

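To see why a method like LoRA is parameter-efficient, consider the parameter counts: LoRA freezes a weight matrix W of shape d × k and learns an additive low-rank update BA, where B is d × r and A is r × k with r ≪ min(d, k). A back-of-the-envelope sketch (the dimensions below are chosen arbitrarily for illustration):

```python
# Parameter count of a full update vs. a rank-r LoRA update for one weight matrix.
d, k, r = 1024, 1024, 8   # illustrative dimensions; r << min(d, k)

full_update_params = d * k       # fine-tuning W directly
lora_params = d * r + r * k      # B (d x r) plus A (r x k)

print(full_update_params)                # 1048576
print(lora_params)                       # 16384
print(lora_params / full_update_params)  # 0.015625, i.e. ~1.6% of the full update
```

The same trade-off applies per adapted weight matrix, which is why LoRA checkpoints are typically a few megabytes even for multi-billion-parameter models.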
Supported models

The tables provided below list the PEFT methods and models supported for each task. To apply a particular PEFT method for a task, please refer to the corresponding Task guides.

Causal Language Modeling

| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
| --- | --- | --- | --- | --- | --- |
| GPT-2 | ✅ | ✅ | ✅ | ✅ | ✅ |
| Bloom | ✅ | ✅ | ✅ | ✅ | ✅ |
| OPT | ✅ | ✅ | ✅ | ✅ | ✅ |
| GPT-Neo | ✅ | ✅ | ✅ | ✅ | ✅ |
| GPT-J | ✅ | ✅ | ✅ | ✅ | ✅ |
| GPT-NeoX-20B | ✅ | ✅ | ✅ | ✅ | ✅ |
| LLaMA | ✅ | ✅ | ✅ | ✅ | ✅ |
| ChatGLM | ✅ | ✅ | ✅ | ✅ | ✅ |

Conditional Generation

| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
| --- | --- | --- | --- | --- | --- |
| T5 | ✅ | ✅ | ✅ | ✅ | ✅ |
| BART | ✅ | ✅ | ✅ | ✅ | ✅ |

Sequence Classification

| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
| --- | --- | --- | --- | --- | --- |
| BERT | ✅ | ✅ | ✅ | ✅ | ✅ |
| RoBERTa | ✅ | ✅ | ✅ | ✅ | ✅ |
| GPT-2 | ✅ | ✅ | ✅ | ✅ | |
| Bloom | ✅ | ✅ | ✅ | ✅ | |
| OPT | ✅ | ✅ | ✅ | ✅ | |
| GPT-Neo | ✅ | ✅ | ✅ | ✅ | |
| GPT-J | ✅ | ✅ | ✅ | ✅ | |
| Deberta | ✅ | | ✅ | ✅ | |
| Deberta-v2 | ✅ | | ✅ | ✅ | |

Token Classification

| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
| --- | --- | --- | --- | --- | --- |
| BERT | ✅ | ✅ | | | |
| RoBERTa | ✅ | ✅ | | | |
| GPT-2 | ✅ | ✅ | | | |
| Bloom | ✅ | ✅ | | | |
| OPT | ✅ | ✅ | | | |
| GPT-Neo | ✅ | ✅ | | | |
| GPT-J | ✅ | ✅ | | | |
| Deberta | ✅ | | | | |
| Deberta-v2 | ✅ | | | | |

Text-to-Image Generation

| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
| --- | --- | --- | --- | --- | --- |
| Stable Diffusion | ✅ | | | | |

Image Classification

| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
| --- | --- | --- | --- | --- | --- |
| ViT | ✅ | | | | |
| Swin | ✅ | | | | |

We have tested LoRA for ViT and Swin for fine-tuning on image classification. However, it should be possible to use LoRA for any ViT-based model from 🤗 Transformers. Check out the Image classification task guide to learn more. If you run into problems, please open an issue.

Image to text (Multi-modal models)

| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
| --- | --- | --- | --- | --- | --- |
| Blip-2 | ✅ | | | | |

Semantic Segmentation

As with image-to-text models, you should be able to apply LoRA to any segmentation model, though we haven't tested this with every architecture yet. If you run into problems, please open an issue.

| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
| --- | --- | --- | --- | --- | --- |
| SegFormer | ✅ | | | | |