<!--
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# PEFT

🤗 PEFT, or Parameter-Efficient Fine-Tuning, is a library for efficiently adapting pre-trained language models (PLMs) to various downstream applications without fine-tuning all of the model's parameters. PEFT methods only fine-tune a small number of (extra) model parameters, significantly decreasing computational and storage costs, because fully fine-tuning large-scale PLMs is prohibitively costly. Recent state-of-the-art PEFT techniques achieve performance comparable to that of full fine-tuning.

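As a quick illustration, here is a minimal sketch of adapting a 🤗 Transformers model with LoRA, assuming the `LoraConfig`/`get_peft_model` API and the `bigscience/mt0-large` checkpoint; the hyperparameter values are only illustrative:

```python
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

# Load a base pre-trained model and wrap it with a LoRA adapter.
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large")
peft_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,  # downstream task the adapter targets
    inference_mode=False,             # training mode: adapter weights stay trainable
    r=8,                              # rank of the low-rank update matrices
    lora_alpha=32,                    # scaling factor applied to the LoRA updates
    lora_dropout=0.1,
)
model = get_peft_model(model, peft_config)

# Only the small set of LoRA parameters is trainable; the base model stays frozen.
model.print_trainable_parameters()
```

Only the adapter weights, a small fraction of the full model, are updated during training; the rest of the PLM remains frozen.
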
PEFT is seamlessly integrated with 🤗 Accelerate for large-scale models, leveraging DeepSpeed and [Big Model Inference](https://huggingface.co/docs/accelerate/usage_guides/big_modeling).

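For example, a trained adapter can be loaded on top of a base model that Big Model Inference dispatches across the available devices. This is only a sketch of one possible workflow, and the adapter repository id below is a placeholder:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# device_map="auto" uses Accelerate's Big Model Inference to dispatch layers
# across the available GPUs/CPU (and disk, if needed).
base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", device_map="auto")

# Attach a trained LoRA adapter on top of the dispatched base model.
# "your-username/opt-350m-lora" is a hypothetical adapter id.
model = PeftModel.from_pretrained(base_model, "your-username/opt-350m-lora")
```
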
Supported methods include the following (a minimal configuration sketch for each appears after the list):

1. LoRA: [LoRA: Low-Rank Adaptation of Large Language Models](https://arxiv.org/pdf/2106.09685.pdf)
2. Prefix Tuning: [Prefix-Tuning: Optimizing Continuous Prompts for Generation](https://aclanthology.org/2021.acl-long.353/), [P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks](https://arxiv.org/pdf/2110.07602.pdf)
3. P-Tuning: [GPT Understands, Too](https://arxiv.org/pdf/2103.10385.pdf)
4. Prompt Tuning: [The Power of Scale for Parameter-Efficient Prompt Tuning](https://arxiv.org/pdf/2104.08691.pdf)

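Each of the methods above maps onto a configuration class that is passed to `get_peft_model` exactly as in the LoRA example earlier. The sketch below assumes those classes; the hyperparameter values are only illustrative:

```python
from peft import (
    LoraConfig,
    PrefixTuningConfig,
    PromptEncoderConfig,  # used for P-Tuning
    PromptTuningConfig,
    TaskType,
)

# One configuration object per supported method.
lora_config = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=32, lora_dropout=0.1)
prefix_config = PrefixTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=20)
p_tuning_config = PromptEncoderConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=20, encoder_hidden_size=128)
prompt_tuning_config = PromptTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=20)
```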