Upload folder using huggingface_hub

#3
This view is limited to 50 files because it contains too many changes.  See the raw diff here.
Files changed (50)
  1. chroma-db-peft/{7f6a74f1-af06-461d-8abb-2b1728f320f7 → a29fd59d-e7d4-4aea-b025-299831602c96}/data_level0.bin +0 -0
  2. chroma-db-peft/{7f6a74f1-af06-461d-8abb-2b1728f320f7 → a29fd59d-e7d4-4aea-b025-299831602c96}/header.bin +0 -0
  3. chroma-db-peft/{7f6a74f1-af06-461d-8abb-2b1728f320f7 → a29fd59d-e7d4-4aea-b025-299831602c96}/length.bin +1 -1
  4. chroma-db-peft/{7f6a74f1-af06-461d-8abb-2b1728f320f7 → a29fd59d-e7d4-4aea-b025-299831602c96}/link_lists.bin +0 -0
  5. chroma-db-peft/chroma.sqlite3 +2 -2
  6. chroma-db-peft/document_dict_peft.pkl +2 -2
  7. peft_md_files/accelerate/deepspeed.md +447 -0
  8. peft_md_files/accelerate/fsdp.md +292 -0
  9. peft_md_files/conceptual_guides/adapter.md +95 -0
  10. peft_md_files/conceptual_guides/ia3.md +68 -0
  11. peft_md_files/conceptual_guides/oft.md +107 -0
  12. peft_md_files/conceptual_guides/prompting.md +77 -0
  13. peft_md_files/developer_guides/checkpoint.md +250 -0
  14. peft_md_files/developer_guides/contributing.md +92 -0
  15. peft_md_files/developer_guides/custom_models.md +310 -0
  16. peft_md_files/developer_guides/lora.md +384 -0
  17. peft_md_files/developer_guides/low_level_api.md +97 -0
  18. peft_md_files/developer_guides/mixed_models.md +37 -0
  19. peft_md_files/developer_guides/model_merging.md +157 -0
  20. peft_md_files/developer_guides/quantization.md +200 -0
  21. peft_md_files/developer_guides/torch_compile.md +76 -0
  22. peft_md_files/developer_guides/troubleshooting.md +273 -0
  23. peft_md_files/index.md +49 -0
  24. peft_md_files/install.md +47 -0
  25. peft_md_files/package_reference/adalora.md +31 -0
  26. peft_md_files/package_reference/adapter_utils.md +31 -0
  27. peft_md_files/package_reference/auto_class.md +48 -0
  28. peft_md_files/package_reference/boft.md +31 -0
  29. peft_md_files/package_reference/config.md +22 -0
  30. peft_md_files/package_reference/fourierft.md +38 -0
  31. peft_md_files/package_reference/helpers.md +12 -0
  32. peft_md_files/package_reference/ia3.md +31 -0
  33. peft_md_files/package_reference/layernorm_tuning.md +34 -0
  34. peft_md_files/package_reference/llama_adapter.md +31 -0
  35. peft_md_files/package_reference/loha.md +31 -0
  36. peft_md_files/package_reference/lokr.md +27 -0
  37. peft_md_files/package_reference/lora.md +35 -0
  38. peft_md_files/package_reference/merge_utils.md +33 -0
  39. peft_md_files/package_reference/multitask_prompt_tuning.md +31 -0
  40. peft_md_files/package_reference/oft.md +31 -0
  41. peft_md_files/package_reference/p_tuning.md +31 -0
  42. peft_md_files/package_reference/peft_model.md +77 -0
  43. peft_md_files/package_reference/peft_types.md +27 -0
  44. peft_md_files/package_reference/poly.md +44 -0
  45. peft_md_files/package_reference/prefix_tuning.md +31 -0
  46. peft_md_files/package_reference/prompt_tuning.md +31 -0
  47. peft_md_files/package_reference/tuners.md +27 -0
  48. peft_md_files/package_reference/vera.md +42 -0
  49. peft_md_files/quicktour.md +170 -0
  50. peft_md_files/task_guides/ia3.md +239 -0
chroma-db-peft/{7f6a74f1-af06-461d-8abb-2b1728f320f7 → a29fd59d-e7d4-4aea-b025-299831602c96}/data_level0.bin RENAMED
File without changes
chroma-db-peft/{7f6a74f1-af06-461d-8abb-2b1728f320f7 → a29fd59d-e7d4-4aea-b025-299831602c96}/header.bin RENAMED
File without changes
chroma-db-peft/{7f6a74f1-af06-461d-8abb-2b1728f320f7 → a29fd59d-e7d4-4aea-b025-299831602c96}/length.bin RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4c65d4f4f981c64a2613d4b82d32fcf22dca9ebfa2cfaffdd4e12e54e890a1d1
+oid sha256:de70266e9ddc6f6bfa65d0853575f16adc9f17a2188847c9f196291022e1ab22
 size 4000
chroma-db-peft/{7f6a74f1-af06-461d-8abb-2b1728f320f7 → a29fd59d-e7d4-4aea-b025-299831602c96}/link_lists.bin RENAMED
File without changes
chroma-db-peft/chroma.sqlite3 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0b0321f854c294da9564e7e90ccb11b3190bd3d900d4606250fb1ccbaabd83be
-size 5226496
+oid sha256:9a19d72fad22bd94dac70cdbe849c6b9080e4fc8be7dbecccb5cdf7e13e3e942
+size 5292032
chroma-db-peft/document_dict_peft.pkl CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:69ea3f661fbc9d85496d6cf77a09cb545998b1f0ebe4a8fb91865444dbfcffae
-size 260392
+oid sha256:abd6135e2bdc35db2d3349f656fc04b7d523201499275e0baf291f0fa4e42094
+size 261248
peft_md_files/accelerate/deepspeed.md ADDED
@@ -0,0 +1,447 @@
1
+ <!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
2
+ rendered properly in your Markdown viewer.
3
+ -->
4
+
5
+ # DeepSpeed
6
+
7
+ [DeepSpeed](https://www.deepspeed.ai/) is a library designed for speed and scale for distributed training of large models with billions of parameters. At its core is the Zero Redundancy Optimizer (ZeRO) that shards optimizer states (ZeRO-1), gradients (ZeRO-2), and parameters (ZeRO-3) across data parallel processes. This drastically reduces memory usage, allowing you to scale your training to billion parameter models. To unlock even more memory efficiency, ZeRO-Offload reduces GPU compute and memory by leveraging CPU resources during optimization.
8
+
9
+ Both of these features are supported in 🤗 Accelerate, and you can use them with 🤗 PEFT.
10
+
11
+ ## Compatibility with `bitsandbytes` quantization + LoRA
12
+
13
+ Below is a table that summarizes the compatibility between PEFT's LoRA, the [`bitsandbytes`](https://github.com/TimDettmers/bitsandbytes) library, and DeepSpeed ZeRO stages with respect to fine-tuning. DeepSpeed ZeRO-1 and ZeRO-2 have no effect at inference, as stage 1 only shards the optimizer states and stage 2 shards the optimizer states and gradients:
14
+
15
+ | DeepSpeed stage | Is compatible? |
16
+ |---|---|
17
+ | Zero-1 | 🟢 |
18
+ | Zero-2 | 🟢 |
19
+ | Zero-3 | 🟢 |
20
+
21
+ For DeepSpeed Stage 3 + QLoRA, please refer to the section [Use PEFT QLoRA and DeepSpeed with ZeRO3 for finetuning large models on multiple GPUs](#use-peft-qlora-and-deepspeed-with-zero3-for-finetuning-large-models-on-multiple-gpus) below.
22
+
23
+ To confirm these observations, we ran the SFT (Supervised Fine-tuning) [official example scripts](https://github.com/huggingface/trl/tree/main/examples) of the [Transformers Reinforcement Learning (TRL) library](https://github.com/huggingface/trl) using QLoRA + PEFT and the accelerate configs available [here](https://github.com/huggingface/trl/tree/main/examples/accelerate_configs). We ran these experiments on 2x NVIDIA T4 GPUs.
24
+
25
+ # Use PEFT and DeepSpeed with ZeRO3 for finetuning large models on multiple devices and multiple nodes
26
+
27
+ This section of the guide will help you learn how to use our DeepSpeed [training script](https://github.com/huggingface/peft/blob/main/examples/sft/train.py) for performing SFT. You'll configure the script to do SFT (supervised fine-tuning) of the Llama 70B model with LoRA and ZeRO-3 on 8x H100 80GB GPUs on a single machine. You can configure it to scale to multiple machines by changing the accelerate config.
28
+
29
+ ## Configuration
30
+
31
+ Start by running the following command to [create a DeepSpeed configuration file](https://huggingface.co/docs/accelerate/quicktour#launching-your-distributed-script) with 🤗 Accelerate. The `--config_file` flag allows you to save the configuration file to a specific location, otherwise it is saved as a `default_config.yaml` file in the 🤗 Accelerate cache.
32
+
33
+ The configuration file is used to set the default options when you launch the training script.
34
+
35
+ ```bash
36
+ accelerate config --config_file deepspeed_config.yaml
37
+ ```
38
+
39
+ You'll be asked a few questions about your setup and to configure the following arguments. In this example, you'll use ZeRO-3 so make sure you pick those options.
40
+
41
+ ```bash
42
+ `zero_stage`: [0] Disabled, [1] optimizer state partitioning, [2] optimizer+gradient state partitioning and [3] optimizer+gradient+parameter partitioning
43
+ `gradient_accumulation_steps`: Number of training steps to accumulate gradients before averaging and applying them. Pass the same value as you would pass via the command-line argument, else you will encounter a mismatch error.
44
+ `gradient_clipping`: Enable gradient clipping with value. Don't set this here as you will be passing it via command-line arguments.
45
+ `offload_optimizer_device`: [none] Disable optimizer offloading, [cpu] offload optimizer to CPU, [nvme] offload optimizer to NVMe SSD. Only applicable with ZeRO >= Stage-2. Set this to `none` as we don't want to enable offloading.
46
+ `offload_param_device`: [none] Disable parameter offloading, [cpu] offload parameters to CPU, [nvme] offload parameters to NVMe SSD. Only applicable with ZeRO Stage-3. Set this to `none` as we don't want to enable offloading.
47
+ `zero3_init_flag`: Decides whether to enable `deepspeed.zero.Init` for constructing massive models. Only applicable with ZeRO Stage-3. Set this to `True`.
48
+ `zero3_save_16bit_model`: Decides whether to save 16-bit model weights when using ZeRO Stage-3. Set this to `True`.
49
+ `mixed_precision`: `no` for FP32 training, `fp16` for FP16 mixed-precision training and `bf16` for BF16 mixed-precision training. Set this to `bf16`.
50
+ ```
51
+
52
+ Once this is done, the corresponding config should look like below and you can find it in the config folder at [deepspeed_config.yaml](https://github.com/huggingface/peft/blob/main/examples/sft/configs/deepspeed_config.yaml):
53
+
54
+ ```yml
55
+ compute_environment: LOCAL_MACHINE
56
+ debug: false
57
+ deepspeed_config:
58
+ deepspeed_multinode_launcher: standard
59
+ gradient_accumulation_steps: 4
60
+ offload_optimizer_device: none
61
+ offload_param_device: none
62
+ zero3_init_flag: true
63
+ zero3_save_16bit_model: true
64
+ zero_stage: 3
65
+ distributed_type: DEEPSPEED
66
+ downcast_bf16: 'no'
67
+ machine_rank: 0
68
+ main_training_function: main
69
+ mixed_precision: bf16
70
+ num_machines: 1
71
+ num_processes: 8
72
+ rdzv_backend: static
73
+ same_network: true
74
+ tpu_env: []
75
+ tpu_use_cluster: false
76
+ tpu_use_sudo: false
77
+ use_cpu: false
78
+ ```
79
+
80
+ ## Launch command
81
+
82
+ The launch command is available at [run_peft_deepspeed.sh](https://github.com/huggingface/peft/blob/main/examples/sft/run_peft_deepspeed.sh) and it is also shown below:
83
+ ```bash
84
+ accelerate launch --config_file "configs/deepspeed_config.yaml" train.py \
85
+ --seed 100 \
86
+ --model_name_or_path "meta-llama/Llama-2-70b-hf" \
87
+ --dataset_name "smangrul/ultrachat-10k-chatml" \
88
+ --chat_template_format "chatml" \
89
+ --add_special_tokens False \
90
+ --append_concat_token False \
91
+ --splits "train,test" \
92
+ --max_seq_len 2048 \
93
+ --num_train_epochs 1 \
94
+ --logging_steps 5 \
95
+ --log_level "info" \
96
+ --logging_strategy "steps" \
97
+ --evaluation_strategy "epoch" \
98
+ --save_strategy "epoch" \
99
+ --push_to_hub \
100
+ --hub_private_repo True \
101
+ --hub_strategy "every_save" \
102
+ --bf16 True \
103
+ --packing True \
104
+ --learning_rate 1e-4 \
105
+ --lr_scheduler_type "cosine" \
106
+ --weight_decay 1e-4 \
107
+ --warmup_ratio 0.0 \
108
+ --max_grad_norm 1.0 \
109
+ --output_dir "llama-sft-lora-deepspeed" \
110
+ --per_device_train_batch_size 8 \
111
+ --per_device_eval_batch_size 8 \
112
+ --gradient_accumulation_steps 4 \
113
+ --gradient_checkpointing True \
114
+ --use_reentrant False \
115
+ --dataset_text_field "content" \
116
+ --use_flash_attn True \
117
+ --use_peft_lora True \
118
+ --lora_r 8 \
119
+ --lora_alpha 16 \
120
+ --lora_dropout 0.1 \
121
+ --lora_target_modules "all-linear" \
122
+ --use_4bit_quantization False
123
+ ```
124
+
125
+ Notice that we are using LoRA with rank=8, alpha=16 and targeting all linear layers. We are passing the DeepSpeed config file and finetuning the 70B Llama model on a subset of the ultrachat dataset.
126
+
127
+ ## The important parts
128
+
129
+ Let's dive a little deeper into the script so you can see what's going on, and understand how it works.
130
+
131
+ The first thing to know is that the script uses DeepSpeed for distributed training as the DeepSpeed config has been passed. The `SFTTrainer` class handles all the heavy lifting of creating the PEFT model using the peft config that is passed. After that, when you call `trainer.train()`, `SFTTrainer` internally uses 🤗 Accelerate to prepare the model, optimizer and trainer using the DeepSpeed config to create the DeepSpeed engine, which is then trained. The main code snippet is below:
132
+
133
+ ```python
134
+ # trainer
135
+ trainer = SFTTrainer(
136
+ model=model,
137
+ tokenizer=tokenizer,
138
+ args=training_args,
139
+ train_dataset=train_dataset,
140
+ eval_dataset=eval_dataset,
141
+ peft_config=peft_config,
142
+ packing=data_args.packing,
143
+ dataset_kwargs={
144
+ "append_concat_token": data_args.append_concat_token,
145
+ "add_special_tokens": data_args.add_special_tokens,
146
+ },
147
+ dataset_text_field=data_args.dataset_text_field,
148
+ max_seq_length=data_args.max_seq_length,
149
+ )
150
+ trainer.accelerator.print(f"{trainer.model}")
151
+
152
+ # train
153
+ checkpoint = None
154
+ if training_args.resume_from_checkpoint is not None:
155
+ checkpoint = training_args.resume_from_checkpoint
156
+ trainer.train(resume_from_checkpoint=checkpoint)
157
+
158
+ # saving final model
159
+ trainer.save_model()
160
+ ```
161
+
162
+ ## Memory usage
163
+
164
+ In the above example, the memory consumed per GPU is 64 GB (80%) as seen in the screenshot below:
165
+
166
+ <div class="flex justify-center">
167
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/peft_deepspeed_mem_usage.png"/>
168
+ </div>
169
+ <small>GPU memory usage for the training run</small>
170
+
171
+ ## More resources
172
+ You can also refer to the blog post [Falcon 180B Finetuning using 🤗 PEFT and DeepSpeed](https://medium.com/@sourabmangrulkar/falcon-180b-finetuning-using-peft-and-deepspeed-b92643091d99) on how to finetune the 180B Falcon model on 16 A100 GPUs across 2 machines.
173
+
174
+
175
+ # Use PEFT QLoRA and DeepSpeed with ZeRO3 for finetuning large models on multiple GPUs
176
+
177
+ In this section, we will look at how to use QLoRA and DeepSpeed Stage-3 for finetuning the 70B Llama model on 2x 40GB GPUs.
178
+ For this, we first need `bitsandbytes>=0.43.0`, `accelerate>=0.28.0`, `transformers>4.38.2`, `trl>0.7.11` and `peft>0.9.0`. We need to set `zero3_init_flag` to true in the Accelerate config. Below is the config which can be found at [deepspeed_config_z3_qlora.yaml](https://github.com/huggingface/peft/blob/main/examples/sft/configs/deepspeed_config_z3_qlora.yaml):
179
+
180
+ ```yml
181
+ compute_environment: LOCAL_MACHINE
182
+ debug: false
183
+ deepspeed_config:
184
+ deepspeed_multinode_launcher: standard
185
+ offload_optimizer_device: none
186
+ offload_param_device: none
187
+ zero3_init_flag: true
188
+ zero3_save_16bit_model: true
189
+ zero_stage: 3
190
+ distributed_type: DEEPSPEED
191
+ downcast_bf16: 'no'
192
+ machine_rank: 0
193
+ main_training_function: main
194
+ mixed_precision: bf16
195
+ num_machines: 1
196
+ num_processes: 2
197
+ rdzv_backend: static
198
+ same_network: true
199
+ tpu_env: []
200
+ tpu_use_cluster: false
201
+ tpu_use_sudo: false
202
+ use_cpu: false
203
+ ```
204
+
205
+ The launch command is given below; it is also available at [run_peft_qlora_deepspeed_stage3.sh](https://github.com/huggingface/peft/blob/main/examples/sft/run_peft_deepspeed.sh):
206
+ ```bash
207
+ accelerate launch --config_file "configs/deepspeed_config_z3_qlora.yaml" train.py \
208
+ --seed 100 \
209
+ --model_name_or_path "meta-llama/Llama-2-70b-hf" \
210
+ --dataset_name "smangrul/ultrachat-10k-chatml" \
211
+ --chat_template_format "chatml" \
212
+ --add_special_tokens False \
213
+ --append_concat_token False \
214
+ --splits "train,test" \
215
+ --max_seq_len 2048 \
216
+ --num_train_epochs 1 \
217
+ --logging_steps 5 \
218
+ --log_level "info" \
219
+ --logging_strategy "steps" \
220
+ --evaluation_strategy "epoch" \
221
+ --save_strategy "epoch" \
222
+ --push_to_hub \
223
+ --hub_private_repo True \
224
+ --hub_strategy "every_save" \
225
+ --bf16 True \
226
+ --packing True \
227
+ --learning_rate 1e-4 \
228
+ --lr_scheduler_type "cosine" \
229
+ --weight_decay 1e-4 \
230
+ --warmup_ratio 0.0 \
231
+ --max_grad_norm 1.0 \
232
+ --output_dir "llama-sft-qlora-dsz3" \
233
+ --per_device_train_batch_size 2 \
234
+ --per_device_eval_batch_size 2 \
235
+ --gradient_accumulation_steps 2 \
236
+ --gradient_checkpointing True \
237
+ --use_reentrant True \
238
+ --dataset_text_field "content" \
239
+ --use_flash_attn True \
240
+ --use_peft_lora True \
241
+ --lora_r 8 \
242
+ --lora_alpha 16 \
243
+ --lora_dropout 0.1 \
244
+ --lora_target_modules "all-linear" \
245
+ --use_4bit_quantization True \
246
+ --use_nested_quant True \
247
+ --bnb_4bit_compute_dtype "bfloat16" \
248
+ --bnb_4bit_quant_storage_dtype "bfloat16"
249
+ ```
250
+
251
+ Notice the new argument being passed, `bnb_4bit_quant_storage_dtype`, which denotes the data type used for packing the 4-bit parameters. For example, when it is set to `bfloat16`, **16/4 = 4** 4-bit params are packed together post quantization.
252
+
253
+ In terms of training code, the important code changes are:
254
+
255
+ ```diff
256
+ ...
257
+
258
+ bnb_config = BitsAndBytesConfig(
259
+ load_in_4bit=args.use_4bit_quantization,
260
+ bnb_4bit_quant_type=args.bnb_4bit_quant_type,
261
+ bnb_4bit_compute_dtype=compute_dtype,
262
+ bnb_4bit_use_double_quant=args.use_nested_quant,
263
+ + bnb_4bit_quant_storage=quant_storage_dtype,
264
+ )
265
+
266
+ ...
267
+
268
+ model = AutoModelForCausalLM.from_pretrained(
269
+ args.model_name_or_path,
270
+ quantization_config=bnb_config,
271
+ trust_remote_code=True,
272
+ attn_implementation="flash_attention_2" if args.use_flash_attn else "eager",
273
+ + torch_dtype=quant_storage_dtype or torch.float32,
274
+ )
275
+ ```
276
+
277
+ Notice that `torch_dtype` for `AutoModelForCausalLM` is the same as the `bnb_4bit_quant_storage` data type. That's it. Everything else is handled by Trainer and TRL.
278
+
279
+ ## Memory usage
280
+
281
+ In the above example, the memory consumed per GPU is **36.6 GB**. Therefore, what took 8x 80GB GPUs with DeepSpeed Stage 3 + LoRA, and a couple of 80GB GPUs with DDP + QLoRA, now requires 2x 40GB GPUs. This makes finetuning of large models more accessible.
282
+
283
+ # Use PEFT and DeepSpeed with ZeRO3 and CPU Offloading for finetuning large models on a single GPU
284
+ This section of the guide will help you learn how to use our DeepSpeed [training script](https://github.com/huggingface/peft/blob/main/examples/conditional_generation/peft_lora_seq2seq_accelerate_ds_zero3_offload.py). You'll configure the script to train a large model for conditional generation with ZeRO-3 and CPU offloading.
285
+
286
+ <Tip>
287
+
288
+ 💡 To help you get started, check out our example training scripts for [causal language modeling](https://github.com/huggingface/peft/blob/main/examples/causal_language_modeling/peft_lora_clm_accelerate_ds_zero3_offload.py) and [conditional generation](https://github.com/huggingface/peft/blob/main/examples/conditional_generation/peft_lora_seq2seq_accelerate_ds_zero3_offload.py). You can adapt these scripts for your own applications or even use them out of the box if your task is similar to the one in the scripts.
289
+
290
+ </Tip>
291
+
292
+ ## Configuration
293
+
294
+ Start by running the following command to [create a DeepSpeed configuration file](https://huggingface.co/docs/accelerate/quicktour#launching-your-distributed-script) with 🤗 Accelerate. The `--config_file` flag allows you to save the configuration file to a specific location, otherwise it is saved as a `default_config.yaml` file in the 🤗 Accelerate cache.
295
+
296
+ The configuration file is used to set the default options when you launch the training script.
297
+
298
+ ```bash
299
+ accelerate config --config_file ds_zero3_cpu.yaml
300
+ ```
301
+
302
+ You'll be asked a few questions about your setup and to configure the following arguments. In this example, you'll use ZeRO-3 along with CPU offloading, so make sure you pick those options.
303
+
304
+ ```bash
305
+ `zero_stage`: [0] Disabled, [1] optimizer state partitioning, [2] optimizer+gradient state partitioning and [3] optimizer+gradient+parameter partitioning
306
+ `gradient_accumulation_steps`: Number of training steps to accumulate gradients before averaging and applying them.
307
+ `gradient_clipping`: Enable gradient clipping with value.
308
+ `offload_optimizer_device`: [none] Disable optimizer offloading, [cpu] offload optimizer to CPU, [nvme] offload optimizer to NVMe SSD. Only applicable with ZeRO >= Stage-2.
309
+ `offload_param_device`: [none] Disable parameter offloading, [cpu] offload parameters to CPU, [nvme] offload parameters to NVMe SSD. Only applicable with ZeRO Stage-3.
310
+ `zero3_init_flag`: Decides whether to enable `deepspeed.zero.Init` for constructing massive models. Only applicable with ZeRO Stage-3.
311
+ `zero3_save_16bit_model`: Decides whether to save 16-bit model weights when using ZeRO Stage-3.
312
+ `mixed_precision`: `no` for FP32 training, `fp16` for FP16 mixed-precision training and `bf16` for BF16 mixed-precision training.
313
+ ```
314
+
315
+ An example [configuration file](https://github.com/huggingface/peft/blob/main/examples/conditional_generation/accelerate_ds_zero3_cpu_offload_config.yaml) might look like the following. The most important thing to notice is that `zero_stage` is set to `3`, and `offload_optimizer_device` and `offload_param_device` are set to `cpu`.
316
+
317
+ ```yml
318
+ compute_environment: LOCAL_MACHINE
319
+ deepspeed_config:
320
+ gradient_accumulation_steps: 1
321
+ gradient_clipping: 1.0
322
+ offload_optimizer_device: cpu
323
+ offload_param_device: cpu
324
+ zero3_init_flag: true
325
+ zero3_save_16bit_model: true
326
+ zero_stage: 3
327
+ distributed_type: DEEPSPEED
328
+ downcast_bf16: 'no'
329
+ dynamo_backend: 'NO'
330
+ fsdp_config: {}
331
+ machine_rank: 0
332
+ main_training_function: main
333
+ megatron_lm_config: {}
334
+ mixed_precision: 'no'
335
+ num_machines: 1
336
+ num_processes: 1
337
+ rdzv_backend: static
338
+ same_network: true
339
+ use_cpu: false
340
+ ```
341
+
342
+ ## The important parts
343
+
344
+ Let's dive a little deeper into the script so you can see what's going on, and understand how it works.
345
+
346
+ Within the [`main`](https://github.com/huggingface/peft/blob/2822398fbe896f25d4dac5e468624dc5fd65a51b/examples/conditional_generation/peft_lora_seq2seq_accelerate_ds_zero3_offload.py#L103) function, the script creates an [`~accelerate.Accelerator`] class to initialize all the necessary requirements for distributed training.
347
+
348
+ <Tip>
349
+
350
+ 💡 Feel free to change the model and dataset inside the `main` function. If your dataset format is different from the one in the script, you may also need to write your own preprocessing function.
351
+
352
+ </Tip>
353
+
354
+ The script also creates a configuration for the 🤗 PEFT method you're using, which in this case, is LoRA. The [`LoraConfig`] specifies the task type and important parameters such as the dimension of the low-rank matrices, the matrices scaling factor, and the dropout probability of the LoRA layers. If you want to use a different 🤗 PEFT method, make sure you replace `LoraConfig` with the appropriate [class](../package_reference/tuners).
355
+
356
+ ```diff
357
+ def main():
358
+ + accelerator = Accelerator()
359
+ model_name_or_path = "facebook/bart-large"
360
+ dataset_name = "twitter_complaints"
361
+ + peft_config = LoraConfig(
362
+ task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, r=8, lora_alpha=32, lora_dropout=0.1
363
+ )
364
+ ```
365
+
366
+ Throughout the script, you'll see the [`~accelerate.Accelerator.main_process_first`] and [`~accelerate.Accelerator.wait_for_everyone`] functions which help control and synchronize when processes are executed.
367
+
368
+ The [`get_peft_model`] function takes a base model and the [`peft_config`] you prepared earlier to create a [`PeftModel`]:
369
+
370
+ ```diff
371
+ model = AutoModelForSeq2SeqLM.from_pretrained(model_name_or_path)
372
+ + model = get_peft_model(model, peft_config)
373
+ ```
374
+
375
+ Pass all the relevant training objects to 🤗 Accelerate's [`~accelerate.Accelerator.prepare`] which makes sure everything is ready for training:
376
+
377
+ ```py
378
+ model, train_dataloader, eval_dataloader, test_dataloader, optimizer, lr_scheduler = accelerator.prepare(
379
+ model, train_dataloader, eval_dataloader, test_dataloader, optimizer, lr_scheduler
380
+ )
381
+ ```
382
+
383
+ The next bit of code checks whether the DeepSpeed plugin is used in the `Accelerator`, and if the plugin exists, whether we are using ZeRO-3. This flag is used when calling the `generate` function during inference to sync GPUs when the model parameters are sharded:
384
+
385
+ ```py
386
+ is_ds_zero_3 = False
387
+ if getattr(accelerator.state, "deepspeed_plugin", None):
388
+ is_ds_zero_3 = accelerator.state.deepspeed_plugin.zero_stage == 3
389
+ ```
390
+
391
+ Inside the training loop, the usual `loss.backward()` is replaced by 🤗 Accelerate's [`~accelerate.Accelerator.backward`] which uses the correct `backward()` method based on your configuration:
392
+
393
+ ```diff
394
+ for epoch in range(num_epochs):
395
+ with TorchTracemalloc() as tracemalloc:
396
+ model.train()
397
+ total_loss = 0
398
+ for step, batch in enumerate(tqdm(train_dataloader)):
399
+ outputs = model(**batch)
400
+ loss = outputs.loss
401
+ total_loss += loss.detach().float()
402
+ + accelerator.backward(loss)
403
+ optimizer.step()
404
+ lr_scheduler.step()
405
+ optimizer.zero_grad()
406
+ ```
407
+
408
+ That is all! The rest of the script handles the training loop, evaluation, and even pushing your model to the Hub for you.
409
+
410
+ ## Train
411
+
412
+ Run the following command to launch the training script. Earlier, you saved the configuration file to `ds_zero3_cpu.yaml`, so you'll need to pass the path to the launcher with the `--config_file` argument like this:
413
+
414
+ ```bash
415
+ accelerate launch --config_file ds_zero3_cpu.yaml examples/peft_lora_seq2seq_accelerate_ds_zero3_offload.py
416
+ ```
417
+
418
+ You'll see some output logs that track memory usage during training, and once it's completed, the script returns the accuracy and compares the predictions to the labels:
419
+
420
+ ```bash
421
+ GPU Memory before entering the train : 1916
422
+ GPU Memory consumed at the end of the train (end-begin): 66
423
+ GPU Peak Memory consumed during the train (max-begin): 7488
424
+ GPU Total Peak Memory consumed during the train (max): 9404
425
+ CPU Memory before entering the train : 19411
426
+ CPU Memory consumed at the end of the train (end-begin): 0
427
+ CPU Peak Memory consumed during the train (max-begin): 0
428
+ CPU Total Peak Memory consumed during the train (max): 19411
429
+ epoch=4: train_ppl=tensor(1.0705, device='cuda:0') train_epoch_loss=tensor(0.0681, device='cuda:0')
430
+ 100%|████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:27<00:00, 3.92s/it]
431
+ GPU Memory before entering the eval : 1982
432
+ GPU Memory consumed at the end of the eval (end-begin): -66
433
+ GPU Peak Memory consumed during the eval (max-begin): 672
434
+ GPU Total Peak Memory consumed during the eval (max): 2654
435
+ CPU Memory before entering the eval : 19411
436
+ CPU Memory consumed at the end of the eval (end-begin): 0
437
+ CPU Peak Memory consumed during the eval (max-begin): 0
438
+ CPU Total Peak Memory consumed during the eval (max): 19411
439
+ accuracy=100.0
440
+ eval_preds[:10]=['no complaint', 'no complaint', 'complaint', 'complaint', 'no complaint', 'no complaint', 'no complaint', 'complaint', 'complaint', 'no complaint']
441
+ dataset['train'][label_column][:10]=['no complaint', 'no complaint', 'complaint', 'complaint', 'no complaint', 'no complaint', 'no complaint', 'complaint', 'complaint', 'no complaint']
442
+ ```
443
+
444
+ # Caveats
445
+ 1. Merging when using PEFT and DeepSpeed is currently unsupported and will raise an error.
446
+ 2. When using CPU offloading, the major gains from using PEFT to shrink the optimizer states and gradients to that of the adapter weights would be realized in CPU RAM; there won't be savings with respect to GPU memory.
447
+ 3. DeepSpeed Stage 3 and QLoRA, when used with CPU offloading, lead to more GPU memory usage compared to disabling CPU offloading.
peft_md_files/accelerate/fsdp.md ADDED
@@ -0,0 +1,292 @@
1
+ <!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
2
+ rendered properly in your Markdown viewer.
3
+ -->
4
+
5
+ # Fully Sharded Data Parallel
6
+
7
+ [Fully sharded data parallel](https://pytorch.org/docs/stable/fsdp.html) (FSDP) is developed for distributed training of large pretrained models up to 1T parameters. FSDP achieves this by sharding the model parameters, gradients, and optimizer states across data parallel processes and it can also offload sharded model parameters to a CPU. The memory efficiency afforded by FSDP allows you to scale training to larger batch or model sizes.
8
+
9
+ Both of these features are supported in 🤗 Accelerate, and you can use them with 🤗 PEFT.
10
+
11
+ # Use PEFT and FSDP
12
+ This section of the guide will help you learn how to use our SFT [training script](https://github.com/huggingface/peft/blob/main/examples/sft/train.py). You'll configure the script to do SFT (supervised fine-tuning) of the Llama 70B model with LoRA and FSDP on 8x H100 80GB GPUs on a single machine. You can configure it to scale to multiple machines by changing the accelerate config.
13
+
14
+ ## Configuration
15
+
16
+ Start by running the following command to [create a FSDP configuration file](https://huggingface.co/docs/accelerate/quicktour#launching-your-distributed-script) with 🤗 Accelerate. The `--config_file` flag allows you to save the configuration file to a specific location, otherwise it is saved as a `default_config.yaml` file in the 🤗 Accelerate cache.
17
+
18
+ The configuration file is used to set the default options when you launch the training script.
19
+
20
+ ```bash
21
+ accelerate config --config_file fsdp_config.yaml
22
+ ```
23
+
24
+ You'll be asked a few questions about your setup and to configure the following arguments. In this example, you'll answer the questionnaire as shown in the image below.
25
+ <div class="flex justify-center">
26
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/fsdp-peft-config.png"/>
27
+ </div>
28
+ <small>Creating Accelerate's config to use FSDP</small>
29
+
30
+ Once this is done, the corresponding config should look like below and you can find it in the config folder at [fsdp_config.yaml](https://github.com/huggingface/peft/blob/main/examples/sft/configs/fsdp_config.yaml):
31
+
32
+ ```yml
33
+ compute_environment: LOCAL_MACHINE
34
+ debug: false
35
+ distributed_type: FSDP
36
+ downcast_bf16: 'no'
37
+ fsdp_config:
38
+ fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
39
+ fsdp_backward_prefetch: BACKWARD_PRE
40
+ fsdp_cpu_ram_efficient_loading: true
41
+ fsdp_forward_prefetch: false
42
+ fsdp_offload_params: false
43
+ fsdp_sharding_strategy: FULL_SHARD
44
+ fsdp_state_dict_type: SHARDED_STATE_DICT
45
+ fsdp_sync_module_states: true
46
+ fsdp_use_orig_params: false
47
+ machine_rank: 0
48
+ main_training_function: main
49
+ mixed_precision: bf16
50
+ num_machines: 1
51
+ num_processes: 8
52
+ rdzv_backend: static
53
+ same_network: true
54
+ tpu_env: []
55
+ tpu_use_cluster: false
56
+ tpu_use_sudo: false
57
+ use_cpu: false
58
+ ```
59
+
60
+ ## Launch command
61
+
62
+ The launch command is available at [run_peft_fsdp.sh](https://github.com/huggingface/peft/blob/main/examples/sft/run_peft_fsdp.sh) and it is also shown below:
63
+ ```bash
64
+ accelerate launch --config_file "configs/fsdp_config.yaml" train.py \
65
+ --seed 100 \
66
+ --model_name_or_path "meta-llama/Llama-2-70b-hf" \
67
+ --dataset_name "smangrul/ultrachat-10k-chatml" \
68
+ --chat_template_format "chatml" \
69
+ --add_special_tokens False \
70
+ --append_concat_token False \
71
+ --splits "train,test" \
72
+ --max_seq_len 2048 \
73
+ --num_train_epochs 1 \
74
+ --logging_steps 5 \
75
+ --log_level "info" \
76
+ --logging_strategy "steps" \
77
+ --evaluation_strategy "epoch" \
78
+ --save_strategy "epoch" \
79
+ --push_to_hub \
80
+ --hub_private_repo True \
81
+ --hub_strategy "every_save" \
82
+ --bf16 True \
83
+ --packing True \
84
+ --learning_rate 1e-4 \
85
+ --lr_scheduler_type "cosine" \
86
+ --weight_decay 1e-4 \
87
+ --warmup_ratio 0.0 \
88
+ --max_grad_norm 1.0 \
89
+ --output_dir "llama-sft-lora-fsdp" \
90
+ --per_device_train_batch_size 8 \
91
+ --per_device_eval_batch_size 8 \
92
+ --gradient_accumulation_steps 4 \
93
+ --gradient_checkpointing True \
94
+ --use_reentrant False \
95
+ --dataset_text_field "content" \
96
+ --use_flash_attn True \
97
+ --use_peft_lora True \
98
+ --lora_r 8 \
99
+ --lora_alpha 16 \
100
+ --lora_dropout 0.1 \
101
+ --lora_target_modules "all-linear" \
102
+ --use_4bit_quantization False
103
+ ```
104
+
105
+ Notice that we are using LoRA with rank=8, alpha=16 and targeting all linear layers. We are passing the FSDP config file and finetuning the 70B Llama model on a subset of the [ultrachat dataset](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k).
106
+
107
+ ## The important parts
108
+
109
+ Let's dive a little deeper into the script so you can see what's going on, and understand how it works.
110
+
111
+ The first thing to know is that the script uses FSDP for distributed training as the FSDP config has been passed. The `SFTTrainer` class handles all the heavy lifting of creating the PEFT model using the peft config that is passed. After that, when you call `trainer.train()`, Trainer internally uses 🤗 Accelerate to prepare the model, optimizer and trainer using the FSDP config to create the FSDP-wrapped model, which is then trained. The main code snippet is below:
112
+
113
+ ```python
114
+ # trainer
115
+ trainer = SFTTrainer(
116
+ model=model,
117
+ tokenizer=tokenizer,
118
+ args=training_args,
119
+ train_dataset=train_dataset,
120
+ eval_dataset=eval_dataset,
121
+ peft_config=peft_config,
122
+ packing=data_args.packing,
123
+ dataset_kwargs={
124
+ "append_concat_token": data_args.append_concat_token,
125
+ "add_special_tokens": data_args.add_special_tokens,
126
+ },
127
+ dataset_text_field=data_args.dataset_text_field,
128
+ max_seq_length=data_args.max_seq_length,
129
+ )
130
+ trainer.accelerator.print(f"{trainer.model}")
131
+ if model_args.use_peft_lora:
132
+ # handle PEFT+FSDP case
133
+ trainer.model.print_trainable_parameters()
134
+ if getattr(trainer.accelerator.state, "fsdp_plugin", None):
135
+ from peft.utils.other import fsdp_auto_wrap_policy
136
+
137
+ fsdp_plugin = trainer.accelerator.state.fsdp_plugin
138
+ fsdp_plugin.auto_wrap_policy = fsdp_auto_wrap_policy(trainer.model)
139
+
140
+ # train
141
+ checkpoint = None
142
+ if training_args.resume_from_checkpoint is not None:
143
+ checkpoint = training_args.resume_from_checkpoint
144
+ trainer.train(resume_from_checkpoint=checkpoint)
145
+
146
+ # saving final model
147
+ if trainer.is_fsdp_enabled:
148
+ trainer.accelerator.state.fsdp_plugin.set_state_dict_type("FULL_STATE_DICT")
149
+ trainer.save_model()
150
+ ```
151
+
152
+
153
+ Here, one main thing to note currently when using FSDP with PEFT is that `use_orig_params` needs to be `False` to realize GPU memory savings. Due to `use_orig_params=False`, the auto wrap policy for FSDP needs to change so that trainable and non-trainable parameters are wrapped separately. This is done by the code snippet below, which uses the utility function `fsdp_auto_wrap_policy` from PEFT:
154
+
155
+ ```python
156
+ if getattr(trainer.accelerator.state, "fsdp_plugin", None):
157
+ from peft.utils.other import fsdp_auto_wrap_policy
158
+
159
+ fsdp_plugin = trainer.accelerator.state.fsdp_plugin
160
+ fsdp_plugin.auto_wrap_policy = fsdp_auto_wrap_policy(trainer.model)
161
+ ```
162
+
163
+ ## Memory usage
164
+
165
+ In the above example, the memory consumed per GPU is 72-80 GB (90-98%) as seen in the screenshot below. The slight increase in GPU memory at the end is from saving the model with the `FULL_STATE_DICT` state dict type instead of `SHARDED_STATE_DICT`, so that the saved adapter weights can be loaded normally with the `from_pretrained` method during inference:
166
+
167
+ <div class="flex justify-center">
168
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/peft_fsdp_mem_usage.png"/>
169
+ </div>
170
+ <small>GPU memory usage for the training run</small>
171
+
172
+ # Use PEFT QLoRA and FSDP for finetuning large models on multiple GPUs
173
+
174
+ In this section, we will look at how to use QLoRA and FSDP for finetuning the 70B Llama model on 2x 24GB GPUs. [Answer.AI](https://www.answer.ai/), in collaboration with bitsandbytes and Hugging Face 🤗, open sourced code enabling the use of FSDP+QLoRA and explained the whole process in their insightful blogpost [You can now train a 70b language model at home](https://www.answer.ai/posts/2024-03-06-fsdp-qlora.html). This is now integrated in the Hugging Face ecosystem.
175
+
176
+ For this, we first need `bitsandbytes>=0.43.0`, `accelerate>=0.28.0`, `transformers>4.38.2`, `trl>0.7.11` and `peft>0.9.0`. We need to set `fsdp_cpu_ram_efficient_loading=true`, `fsdp_use_orig_params=false` and `fsdp_offload_params=true` (CPU offloading) in the Accelerate config. When not using the accelerate launcher, you can alternatively set the environment variable `export FSDP_CPU_RAM_EFFICIENT_LOADING=true`. Here, we will be using the accelerate config; below is the config which can be found at [fsdp_config_qlora.yaml](https://github.com/huggingface/peft/blob/main/examples/sft/configs/fsdp_config_qlora.yaml):
177
+
178
+ ```yml
179
+ compute_environment: LOCAL_MACHINE
180
+ debug: false
181
+ distributed_type: FSDP
182
+ downcast_bf16: 'no'
183
+ fsdp_config:
184
+ fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
185
+ fsdp_backward_prefetch: BACKWARD_PRE
186
+ fsdp_cpu_ram_efficient_loading: true
187
+ fsdp_forward_prefetch: false
188
+ fsdp_offload_params: true
189
+ fsdp_sharding_strategy: FULL_SHARD
190
+ fsdp_state_dict_type: SHARDED_STATE_DICT
191
+ fsdp_sync_module_states: true
192
+ fsdp_use_orig_params: false
193
+ machine_rank: 0
194
+ main_training_function: main
195
+ mixed_precision: 'no'
196
+ num_machines: 1
197
+ num_processes: 2
198
+ rdzv_backend: static
199
+ same_network: true
200
+ tpu_env: []
201
+ tpu_use_cluster: false
202
+ tpu_use_sudo: false
203
+ use_cpu: false
204
+ ```
205
+
206
+ The launch command is given below; it is also available at [run_peft_qlora_fsdp.sh](https://github.com/huggingface/peft/blob/main/examples/sft/run_peft_qlora_fsdp.sh):
207
+ ```bash
208
+ accelerate launch --config_file "configs/fsdp_config_qlora.yaml" train.py \
209
+ --seed 100 \
210
+ --model_name_or_path "meta-llama/Llama-2-70b-hf" \
211
+ --dataset_name "smangrul/ultrachat-10k-chatml" \
212
+ --chat_template_format "chatml" \
213
+ --add_special_tokens False \
214
+ --append_concat_token False \
215
+ --splits "train,test" \
216
+ --max_seq_len 2048 \
217
+ --num_train_epochs 1 \
218
+ --logging_steps 5 \
219
+ --log_level "info" \
220
+ --logging_strategy "steps" \
221
+ --evaluation_strategy "epoch" \
222
+ --save_strategy "epoch" \
223
+ --push_to_hub \
224
+ --hub_private_repo True \
225
+ --hub_strategy "every_save" \
226
+ --bf16 True \
227
+ --packing True \
228
+ --learning_rate 1e-4 \
229
+ --lr_scheduler_type "cosine" \
230
+ --weight_decay 1e-4 \
231
+ --warmup_ratio 0.0 \
232
+ --max_grad_norm 1.0 \
233
+ --output_dir "llama-sft-qlora-fsdp" \
234
+ --per_device_train_batch_size 2 \
235
+ --per_device_eval_batch_size 2 \
236
+ --gradient_accumulation_steps 2 \
237
+ --gradient_checkpointing True \
238
+ --use_reentrant True \
239
+ --dataset_text_field "content" \
240
+ --use_flash_attn True \
241
+ --use_peft_lora True \
242
+ --lora_r 8 \
243
+ --lora_alpha 16 \
244
+ --lora_dropout 0.1 \
245
+ --lora_target_modules "all-linear" \
246
+ --use_4bit_quantization True \
247
+ --use_nested_quant True \
248
+ --bnb_4bit_compute_dtype "bfloat16" \
249
+ --bnb_4bit_quant_storage_dtype "bfloat16"
250
+ ```
251
+
252
+ Notice the new argument being passed, `bnb_4bit_quant_storage_dtype`, which denotes the data type for packing the 4-bit parameters. For example, when it is set to `bfloat16`, **16/4 = 4** 4-bit params are packed together post quantization. When using mixed precision training with `bfloat16`, `bnb_4bit_quant_storage_dtype` can be either `bfloat16` for pure `bfloat16` finetuning, or `float32` for automatic mixed precision (this consumes more GPU memory). When using mixed precision training with `float16`, `bnb_4bit_quant_storage_dtype` should be set to `float32` for stable automatic mixed precision training.
253
+
254
+ In terms of training code, the important code changes are:
255
+
256
+ ```diff
257
+ ...
258
+
259
+ bnb_config = BitsAndBytesConfig(
260
+ load_in_4bit=args.use_4bit_quantization,
261
+ bnb_4bit_quant_type=args.bnb_4bit_quant_type,
262
+ bnb_4bit_compute_dtype=compute_dtype,
263
+ bnb_4bit_use_double_quant=args.use_nested_quant,
264
+ + bnb_4bit_quant_storage=quant_storage_dtype,
265
+ )
266
+
267
+ ...
268
+
269
+ model = AutoModelForCausalLM.from_pretrained(
270
+ args.model_name_or_path,
271
+ quantization_config=bnb_config,
272
+ trust_remote_code=True,
273
+ attn_implementation="flash_attention_2" if args.use_flash_attn else "eager",
274
+ + torch_dtype=quant_storage_dtype or torch.float32,
275
+ )
276
+ ```
277
+
278
+ Notice that `torch_dtype` for `AutoModelForCausalLM` is the same as the `bnb_4bit_quant_storage` data type. That's it. Everything else is handled by Trainer and TRL.
279
+
280
+ ## Memory usage
281
+
282
+ In the above example, the memory consumed per GPU is **19.6 GB** while CPU RAM usage is around **107 GB**. When disabling CPU offloading, the GPU memory usage is **35.6 GB/GPU**. Therefore, what took 16x 80GB GPUs for full finetuning, 8x 80GB GPUs with FSDP+LoRA, and a couple of 80GB GPUs with DDP+QLoRA, now requires 2x 24GB GPUs. This makes finetuning of large models more accessible.
283
+
284
+ ## More resources
285
+ You can also refer to the [llama-recipes](https://github.com/facebookresearch/llama-recipes/?tab=readme-ov-file#fine-tuning) repo and the [Getting started with Llama](https://llama.meta.com/get-started/#fine-tuning) guide on how to finetune using FSDP and PEFT.
286
+
287
+ ## Caveats
288
+ 1. Merging when using PEFT and FSDP is currently unsupported and will raise an error.
289
+ 2. Passing the `modules_to_save` config parameter is untested at present.
290
+ 3. GPU memory saving when using CPU offloading is untested at present.
291
+ 4. When using FSDP+QLoRA, `paged_adamw_8bit` currently results in an error when saving a checkpoint.
292
+ 5. DoRA training with FSDP should work (albeit at lower speed than LoRA). If combined with bitsandbytes (QDoRA), 4-bit quantization should also work, but 8-bit quantization has known issues and is not recommended.
peft_md_files/conceptual_guides/adapter.md ADDED
@@ -0,0 +1,95 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # Adapters
18
+
19
+ Adapter-based methods add extra trainable parameters after the attention and fully-connected layers of a frozen pretrained model to reduce memory usage and speed up training. The method varies depending on the adapter: it could simply be an extra added layer, or it could express the weight updates ∆W as a low-rank decomposition of the weight matrix. Either way, the adapters are typically small but demonstrate comparable performance to a fully finetuned model and enable training larger models with fewer resources.
20
+
21
+ This guide will give you a brief overview of the adapter methods supported by PEFT (if you're interested in learning more details about a specific method, take a look at the linked paper).
22
+
23
+ ## Low-Rank Adaptation (LoRA)
24
+
25
+ <Tip>
26
+
27
+ LoRA is one of the most popular PEFT methods and a good starting point if you're just getting started with PEFT. It was originally developed for large language models but it is a tremendously popular training method for diffusion models because of its efficiency and effectiveness.
28
+
29
+ </Tip>
30
+
31
+ As mentioned briefly earlier, [LoRA](https://hf.co/papers/2106.09685) is a technique that accelerates the finetuning of large models while consuming less memory.
32
+
33
+ LoRA represents the weight updates ∆W with two smaller matrices (called *update matrices*) through low-rank decomposition. These new matrices can be trained to adapt to the new data while keeping the overall number of parameters low. The original weight matrix remains frozen and doesn't receive any further updates. To produce the final results, the original and extra adapted weights are combined. You could also merge the adapter weights with the base model to eliminate inference latency.
34
+
35
+ <div class="flex justify-center">
36
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora_animated.gif"/>
37
+ </div>
38
+
39
+ This approach has a number of advantages:
40
+
41
+ * LoRA makes finetuning more efficient by drastically reducing the number of trainable parameters.
42
+ * The original pretrained weights are kept frozen, which means you can have multiple lightweight and portable LoRA models for various downstream tasks built on top of them.
43
+ * LoRA is orthogonal to other parameter-efficient methods and can be combined with many of them.
44
+ * Performance of models finetuned using LoRA is comparable to the performance of fully finetuned models.
45
+
46
+ In principle, LoRA can be applied to any subset of weight matrices in a neural network to reduce the number of trainable parameters. However, for simplicity and further parameter efficiency, LoRA is typically only applied to the attention blocks in Transformer models. The resulting number of trainable parameters in a LoRA model depends on the size of the update matrices, which is determined mainly by the rank `r` and the shape of the original weight matrix.
47
+
48
+ <div class="flex justify-center">
49
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora.png"/>
50
+ </div>
51
+ <small><a href="https://hf.co/papers/2103.10385">Navigating Text-To-Image Customization: From LyCORIS Fine-Tuning to Model Evaluation</a></small>
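+
+ As a concrete illustration of the update matrices described above, a minimal PEFT sketch might look like the following (the base model and target module names are illustrative assumptions, not recommendations):
+
+ ```py
+ from transformers import AutoModelForCausalLM
+ from peft import LoraConfig, get_peft_model
+
+ base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")  # illustrative base model
+
+ # each targeted weight keeps its frozen W and gains a trainable low-rank update B @ A
+ # (rank r=8, scaled by lora_alpha / r); only A and B are trained
+ config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.1, target_modules=["q_proj", "v_proj"])
+ model = get_peft_model(base, config)
+ model.print_trainable_parameters()
+ ```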
52
+
53
+ ## Low-Rank Hadamard Product (LoHa)
54
+
55
+ Low-rank decomposition can impact performance because the weight updates are limited to the low-rank space, which can constrain a model's expressiveness. However, you don't necessarily want to use a larger rank because it increases the number of trainable parameters. To address this, [LoHa](https://huggingface.co/papers/2108.06098) (a method originally developed for computer vision) was applied to diffusion models where the ability to generate diverse images is an important consideration. LoHa should also work with general model types, but the embedding layers aren't currently implemented in PEFT.
56
+
57
+ LoHa uses the [Hadamard product](https://en.wikipedia.org/wiki/Hadamard_product_(matrices)) (element-wise product) instead of the matrix product. ∆W is represented by four smaller matrices instead of two - like in LoRA - and each pair of these low-rank matrices is combined with the Hadamard product. As a result, ∆W can have the same number of trainable parameters but a higher rank and expressivity.
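+
+ Written in the ∆W notation used above, the decomposition is roughly the following (a sketch in my own notation, not a formula quoted from the paper):
+
+ ```latex
+ % \odot is the element-wise (Hadamard) product; each B_i A_i is a low-rank pair
+ \Delta W \approx (B_1 A_1) \odot (B_2 A_2)
+ ```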
58
+
59
+ ## Low-Rank Kronecker Product (LoKr)
60
+
61
+ [LoKr](https://hf.co/papers/2309.14859) is very similar to LoRA and LoHa, and it is also mainly applied to diffusion models, though you could also use it with other model types. LoKr replaces the matrix product with the [Kronecker product](https://en.wikipedia.org/wiki/Kronecker_product) instead. The Kronecker product decomposition creates a block matrix which preserves the rank of the original weight matrix. Another benefit of the Kronecker product is that it can be vectorized by stacking the matrix columns. This can speed up the process because you're avoiding fully reconstructing ∆W.
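+
+ In the same spirit, the LoKr update can be sketched as a Kronecker product of two much smaller factors (again my own notation; the second factor may optionally be further low-rank factored):
+
+ ```latex
+ % \otimes is the Kronecker product
+ \Delta W \approx A \otimes B
+ ```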
62
+
63
+ ## Orthogonal Finetuning (OFT)
64
+
65
+ <div class="flex justify-center">
66
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/oft.png"/>
67
+ </div>
68
+ <small><a href="https://hf.co/papers/2306.07280">Controlling Text-to-Image Diffusion by Orthogonal Finetuning</a></small>
69
+
70
+ [OFT](https://hf.co/papers/2306.07280) is a method that primarily focuses on preserving a pretrained model's generative performance in the finetuned model. It tries to maintain the same cosine similarity (hyperspherical energy) between all pairwise neurons in a layer because this better captures the semantic information among neurons. This means OFT is more capable at preserving the subject and it is better for controllable generation (similar to [ControlNet](https://huggingface.co/docs/diffusers/using-diffusers/controlnet)).
71
+
72
+ OFT preserves the hyperspherical energy by learning an orthogonal transformation for neurons to keep the cosine similarity between them unchanged. In practice, this means taking the matrix product of an orthogonal matrix with the pretrained weight matrix. However, to be parameter-efficient, the orthogonal matrix is represented as a block-diagonal matrix with rank `r` blocks. Whereas LoRA reduces the number of trainable parameters with low-rank structures, OFT reduces the number of trainable parameters with a sparse block-diagonal matrix structure.
73
+
74
+ ## Orthogonal Butterfly (BOFT)
75
+
76
+ [BOFT](https://hf.co/papers/2311.06243) is a method that primarily focuses on preserving a pretrained model's generative performance in the finetuned model. It tries to maintain the same cosine similarity (hyperspherical energy) between all pairwise neurons in a layer because this better captures the semantic information among neurons. Like OFT, this means BOFT is better at preserving the subject and better for controllable generation (similar to [ControlNet](https://huggingface.co/docs/diffusers/using-diffusers/controlnet)).
77
+
78
+ OFT preserves the hyperspherical energy by learning an orthogonal transformation for neurons to keep the cosine similarity between them unchanged. In practice, this means taking the matrix product of an orthogonal matrix with the pretrained weight matrix. However, to be parameter-efficient, the orthogonal matrix is represented as a block-diagonal matrix with rank `r` blocks. Whereas LoRA reduces the number of trainable parameters with low-rank structures, OFT reduces the number of trainable parameters with a sparse block-diagonal matrix structure.
79
+
80
+ ## Adaptive Low-Rank Adaptation (AdaLoRA)
81
+
82
+ [AdaLoRA](https://hf.co/papers/2303.10512) manages the parameter budget introduced from LoRA by allocating more parameters - in other words, a higher rank `r` - for important weight matrices that are better adapted for a task and pruning less important ones. The rank is controlled by a method similar to singular value decomposition (SVD). The ∆W is parameterized with two orthogonal matrices and a diagonal matrix which contains singular values. This parametrization method avoids iteratively applying SVD which is computationally expensive. Based on this method, the rank of ∆W is adjusted according to an importance score. ∆W is divided into triplets and each triplet is scored according to its contribution to model performance. Triplets with low importance scores are pruned and triplets with high importance scores are kept for finetuning.
83
+
84
+ ## Llama-Adapter
85
+
86
+ [Llama-Adapter](https://hf.co/papers/2303.16199) is a method for adapting Llama into an instruction-following model. To help adapt the model for instruction-following, the adapter is trained with a 52K instruction-output dataset.
87
+
88
+ A set of learnable adaption prompts is prefixed to the input instruction tokens. These are inserted into the upper layers of the model because it is better to learn with the higher-level semantics of the pretrained model. The instruction-output tokens prefixed to the input guide the adaption prompt to generate a contextual response.
89
+
90
+ <div class="flex justify-center">
91
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/llama-adapter.png"/>
92
+ </div>
93
+ <small><a href="https://hf.co/papers/2303.16199">LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention</a></small>
94
+
95
+ To avoid adding noise to the tokens, the adapter uses zero-initialized attention. On top of this, the adapter adds a learnable gating factor (initialized with zeros) to progressively add information to the model during training. This prevents overwhelming the model's pretrained knowledge with the newly learned instructions.
peft_md_files/conceptual_guides/ia3.md ADDED
@@ -0,0 +1,68 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # IA3
18
+
19
+ This conceptual guide gives a brief overview of [IA3](https://arxiv.org/abs/2205.05638), a parameter-efficient fine-tuning technique that is
20
+ intended to improve over [LoRA](./lora).
21
+
22
+ To make fine-tuning more efficient, IA3 (Infused Adapter by Inhibiting and Amplifying Inner Activations)
23
+ rescales inner activations with learned vectors. These learned vectors are injected in the attention and feedforward modules
24
+ in a typical transformer-based architecture. These learned vectors are the only trainable parameters during fine-tuning, and thus the original
25
+ weights remain frozen. Dealing with learned vectors (as opposed to learned low-rank updates to a weight matrix like LoRA)
26
+ keeps the number of trainable parameters much smaller.
27
+
28
+ Similar to LoRA, IA3 carries many of the same advantages:
29
+
30
+ * IA3 makes fine-tuning more efficient by drastically reducing the number of trainable parameters. (For T0, an IA3 model only has about 0.01% trainable parameters, while even LoRA has > 0.1%)
31
+ * The original pre-trained weights are kept frozen, which means you can have multiple lightweight and portable IA3 models for various downstream tasks built on top of them.
32
+ * Performance of models fine-tuned using IA3 is comparable to the performance of fully fine-tuned models.
33
+ * IA3 does not add any inference latency because adapter weights can be merged with the base model.
34
+
35
+ In principle, IA3 can be applied to any subset of weight matrices in a neural network to reduce the number of trainable
36
+ parameters. Following the authors' implementation, IA3 weights are added to the key, value and feedforward layers
37
+ of a Transformer model. To be specific, for transformer models, IA3 weights are added to the outputs of key and value layers, and to the input of the second feedforward layer
38
+ in each transformer block.
39
+
40
+ Given the target layers for injecting IA3 parameters, the number of trainable parameters
41
+ can be determined based on the size of the weight matrices.
42
+
43
+
44
+ ## Common IA3 parameters in PEFT
45
+
46
+ As with other methods supported by PEFT, to fine-tune a model using IA3, you need to:
47
+
48
+ 1. Instantiate a base model.
49
+ 2. Create a configuration (`IA3Config`) where you define IA3-specific parameters.
50
+ 3. Wrap the base model with `get_peft_model()` to get a trainable `PeftModel`.
51
+ 4. Train the `PeftModel` as you normally would train the base model.
52
+
53
+ `IA3Config` allows you to control how IA3 is applied to the base model through the following parameters:
54
+
55
+ - `target_modules`: The modules (for example, attention blocks) to apply the IA3 vectors.
56
+ - `feedforward_modules`: The list of modules to be treated as feedforward layers in `target_modules`. While learned vectors are multiplied with
57
+ the output activation for attention blocks, the vectors are multiplied with the input for classic feedforward layers. Note that `feedforward_modules` must be a subset of `target_modules`.
58
+ - `modules_to_save`: List of modules apart from IA3 layers to be set as trainable and saved in the final checkpoint. These typically include the model's custom head that is randomly initialized for the fine-tuning task.
59
+
60
+ ## Example Usage
61
+
62
+ For the task of sequence classification, one can initialize the IA3 config for a Llama model as follows:
63
+
64
+ ```py
65
+ from peft import IA3Config, TaskType
+
+ peft_config = IA3Config(
66
+ task_type=TaskType.SEQ_CLS, target_modules=["k_proj", "v_proj", "down_proj"], feedforward_modules=["down_proj"]
67
+ )
68
+ ```
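+
+ The config can then be used to wrap the base model, as with any other PEFT method. This is a minimal sketch; the checkpoint name and number of labels are placeholders:
+
+ ```py
+ from transformers import AutoModelForSequenceClassification
+ from peft import get_peft_model
+
+ model = AutoModelForSequenceClassification.from_pretrained("meta-llama/Llama-2-7b-hf", num_labels=2)
+ peft_model = get_peft_model(model, peft_config)
+ peft_model.print_trainable_parameters()
+ ```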
peft_md_files/conceptual_guides/oft.md ADDED
@@ -0,0 +1,107 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # Orthogonal Finetuning (OFT and BOFT)
18
+
19
+ This conceptual guide gives a brief overview of [OFT](https://arxiv.org/abs/2306.07280) and [BOFT](https://arxiv.org/abs/2311.06243), parameter-efficient fine-tuning techniques that use an orthogonal matrix to multiplicatively transform the pretrained weight matrices.
20
+
21
+ To achieve efficient fine-tuning, OFT represents the weight updates with an orthogonal transformation. The orthogonal transformation is parameterized by an orthogonal matrix multiplied with the pretrained weight matrix. These new matrices can be trained to adapt to the new data while keeping the overall number of changes low. The original weight matrix remains frozen and doesn’t receive any further adjustments. To produce the final results, both the original and the adapted weights are multiplied together.
22
+
23
+ Orthogonal Butterfly (BOFT) generalizes OFT with Butterfly factorization and further improves its parameter efficiency and finetuning flexibility. In short, OFT can be viewed as a special case of BOFT. Unlike LoRA, which uses additive low-rank weight updates, BOFT uses multiplicative orthogonal weight updates. The comparison is shown below.
24
+
25
+ <div class="flex justify-center">
26
+ <img src="https://raw.githubusercontent.com/wy1iu/butterfly-oft/main/assets/BOFT_comparison.png"/>
27
+ </div>
28
+
29
+
30
+ BOFT has some advantages compared to LoRA:
31
+
32
+ * BOFT proposes a simple yet generic way to finetune pretrained models on downstream tasks, yielding better preservation of pretraining knowledge and better parameter efficiency.
33
+ * Through the orthogonality, BOFT introduces a structural constraint, i.e., keeping the [hyperspherical energy](https://arxiv.org/abs/1805.09298) unchanged during finetuning. This can effectively reduce the forgetting of pretraining knowledge.
34
+ * BOFT uses the butterfly factorization to efficiently parameterize the orthogonal matrix, which yields a compact yet expressive learning space (i.e., hypothesis class).
35
+ * The sparse matrix decomposition in BOFT brings in additional inductive biases that are beneficial to generalization.
36
+
37
+ In principle, BOFT can be applied to any subset of weight matrices in a neural network to reduce the number of trainable parameters. Given the target layers for injecting BOFT parameters, the number of trainable parameters can be determined based on the size of the weight matrices.
38
+
39
+ ## Merge OFT/BOFT weights into the base model
40
+
41
+ Similar to LoRA, the weights learned by OFT/BOFT can be integrated into the pretrained weight matrices using the `merge_and_unload()` function. This function merges the adapter weights with the base model which allows you to effectively use the newly merged model as a standalone model.
42
+
43
+ <div class="flex justify-center">
44
+ <img src="https://raw.githubusercontent.com/wy1iu/butterfly-oft/main/assets/boft_merge.png"/>
45
+ </div>
46
+
47
+ This works because during training, the orthogonal weight matrix (R in the diagram above) and the pretrained weight matrices are separate. But once training is complete, these weights can actually be merged (multiplied) into a new weight matrix that is equivalent.
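+
+ As a minimal sketch, assuming `boft_model` is a trained `PeftModel` and the output path is just a placeholder:
+
+ ```py
+ # merge the orthogonal adapter weights into the base weights and drop the PEFT wrapper
+ merged_model = boft_model.merge_and_unload()
+ merged_model.save_pretrained("boft-merged-model")
+ ```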
48
+
49
+ ## Utils for OFT / BOFT
50
+
51
+ ### Common OFT / BOFT parameters in PEFT
52
+
53
+ As with other methods supported by PEFT, to fine-tune a model using OFT or BOFT, you need to:
54
+
55
+ 1. Instantiate a base model.
56
+ 2. Create a configuration (`OFTConfig` or `BOFTConfig`) where you define OFT/BOFT-specific parameters.
57
+ 3. Wrap the base model with `get_peft_model()` to get a trainable `PeftModel`.
58
+ 4. Train the `PeftModel` as you normally would train the base model.
59
+
60
+
61
+ ### BOFT-specific parameters
62
+
63
+ `BOFTConfig` allows you to control how OFT/BOFT is applied to the base model through the following parameters:
64
+
65
+ - `boft_block_size`: the BOFT matrix block size across different layers, expressed in `int`. A smaller block size results in sparser update matrices with fewer trainable parameters. **Note**, please choose `boft_block_size` so that it divides most layers' input dimension (`in_features`), e.g., 4, 8, 16. Also, please only specify either `boft_block_size` or `boft_block_num`, but not both simultaneously, and do not leave both at 0, because `boft_block_size` x `boft_block_num` must equal the layer's input dimension (for example, with `in_features=768` and `boft_block_size=64`, there are 12 blocks).
+ - `boft_block_num`: the number of BOFT matrix blocks across different layers, expressed in `int`. More blocks result in sparser update matrices with fewer trainable parameters. **Note**, please choose `boft_block_num` so that it divides most layers' input dimension (`in_features`), e.g., 4, 8, 16. Also, please only specify either `boft_block_size` or `boft_block_num`, but not both simultaneously, and do not leave both at 0, because `boft_block_size` x `boft_block_num` must equal the layer's input dimension.
69
+ - `boft_n_butterfly_factor`: the number of butterfly factors. **Note**, for `boft_n_butterfly_factor=1`, BOFT is the same as vanilla OFT; for `boft_n_butterfly_factor=2`, the effective block size of OFT becomes twice as big and the number of blocks becomes half.
70
+ - `bias`: specify if the `bias` parameters should be trained. Can be `"none"`, `"all"` or `"boft_only"`.
71
+ - `boft_dropout`: specify the probability of multiplicative dropout.
72
+ - `target_modules`: The modules (for example, attention blocks) to inject the OFT/BOFT matrices.
73
+ - `modules_to_save`: List of modules apart from OFT/BOFT matrices to be set as trainable and saved in the final checkpoint. These typically include the model's custom head that is randomly initialized for the fine-tuning task.
74
+
75
+
76
+
77
+ ## BOFT Example Usage
78
+
79
+ For examples of applying the BOFT method to various downstream tasks, take a look at the following step-by-step guides on how to finetune a model with BOFT:
82
+ - [Dreambooth finetuning with BOFT](../task_guides/boft_dreambooth)
83
+ - [Controllable generation finetuning with BOFT (ControlNet)](../task_guides/boft_controlnet)
84
+
85
+ For the task of image classification, one can initialize the BOFT config for a DinoV2 model as follows:
86
+
87
+ ```py
88
+ import transformers
+ from peft import BOFTConfig, get_peft_model
91
+
92
+ config = BOFTConfig(
93
+ boft_block_size=4,
94
+ boft_n_butterfly_factor=2,
95
+ target_modules=["query", "value", "key", "output.dense", "mlp.fc1", "mlp.fc2"],
96
+ boft_dropout=0.1,
97
+ bias="boft_only",
98
+ modules_to_save=["classifier"],
99
+ )
100
+
101
+ model = transformers.Dinov2ForImageClassification.from_pretrained(
102
+ "facebook/dinov2-large",
103
+ num_labels=100,
104
+ )
105
+
106
+ boft_model = get_peft_model(model, config)
107
+ ```
peft_md_files/conceptual_guides/prompting.md ADDED
@@ -0,0 +1,77 @@
1
+ <!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
2
+ rendered properly in your Markdown viewer.
3
+ -->
4
+
5
+ # Soft prompts
6
+
7
+ Training large pretrained language models is very time-consuming and compute-intensive. As they continue to grow in size, there is increasing interest in more efficient training methods such as *prompting*. Prompting primes a frozen pretrained model for a specific downstream task by including a text prompt that describes the task or even demonstrates an example of the task. With prompting, you can avoid fully training a separate model for each downstream task, and use the same frozen pretrained model instead. This is a lot easier because you can use the same model for several different tasks, and it is significantly more efficient to train and store a smaller set of prompt parameters than to train all the model's parameters.
8
+
9
+ There are two categories of prompting methods:
10
+
11
+ - hard prompts are manually handcrafted text prompts with discrete input tokens; the downside is that it requires a lot of effort to create a good prompt
12
+ - soft prompts are learnable tensors concatenated with the input embeddings that can be optimized to a dataset; the downside is that they aren't human readable because you aren't matching these "virtual tokens" to the embeddings of a real word
13
+
14
+ This conceptual guide provides a brief overview of the soft prompt methods included in 🤗 PEFT: prompt tuning, prefix tuning, P-tuning, and multitask prompt tuning.
15
+
16
+ ## Prompt tuning
17
+
18
+ <div class="flex justify-center">
19
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/prompt-tuning.png"/>
20
+ </div>
21
+ <small>Only train and store a significantly smaller set of task-specific prompt parameters <a href="https://hf.co/papers/2104.08691">(image source)</a>.</small>
22
+
23
+ [Prompt tuning](https://hf.co/papers/2104.08691) was developed for text classification tasks on T5 models, and all downstream tasks are cast as a text generation task. For example, sequence classification usually assigns a single class label to a sequence of text. By casting it as a text generation task, the tokens that make up the class label are *generated*. Prompts are added to the input as a series of tokens. Typically, the model parameters are fixed which means the prompt tokens are also fixed by the model parameters.
24
+
25
+ The key idea behind prompt tuning is that prompt tokens have their own parameters that are updated independently. This means you can keep the pretrained model's parameters frozen, and only update the gradients of the prompt token embeddings. The results are comparable to the traditional method of training the entire model, and prompt tuning performance scales as model size increases.
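+
+ In PEFT, this corresponds to `PromptTuningConfig`. Below is a minimal sketch; the base model and the number of virtual tokens are placeholder choices:
+
+ ```py
+ from transformers import AutoModelForCausalLM
+ from peft import PromptTuningConfig, TaskType, get_peft_model
+
+ model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
+ config = PromptTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=8)
+ peft_model = get_peft_model(model, config)
+ peft_model.print_trainable_parameters()
+ ```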
26
+
27
+ Take a look at [Prompt tuning for causal language modeling](../task_guides/clm-prompt-tuning) for a step-by-step guide on how to train a model with prompt tuning.
28
+
29
+ ## Prefix tuning
30
+
31
+ <div class="flex justify-center">
32
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/prefix-tuning.png"/>
33
+ </div>
34
+ <small>Optimize the prefix parameters for each task <a href="https://hf.co/papers/2101.00190">(image source)</a>.</small>
35
+
36
+ [Prefix tuning](https://hf.co/papers/2101.00190) was designed for natural language generation (NLG) tasks on GPT models. It is very similar to prompt tuning; prefix tuning also prepends a sequence of task-specific vectors to the input that can be trained and updated while keeping the rest of the pretrained model's parameters frozen.
37
+
38
+ The main difference is that the prefix parameters are inserted in **all** of the model layers, whereas prompt tuning only adds the prompt parameters to the model input embeddings. The prefix parameters are also optimized by a separate feed-forward network (FFN) instead of training directly on the soft prompts because it causes instability and hurts performance. The FFN is discarded after updating the soft prompts.
39
+
40
+ As a result, the authors found that prefix tuning demonstrates comparable performance to fully finetuning a model, despite having 1000x fewer parameters, and it performs even better in low-data settings.
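+
+ In PEFT, prefix tuning is configured with `PrefixTuningConfig`. A minimal sketch, with a placeholder model and prefix length:
+
+ ```py
+ from transformers import AutoModelForSeq2SeqLM
+ from peft import PrefixTuningConfig, TaskType, get_peft_model
+
+ model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
+ config = PrefixTuningConfig(task_type=TaskType.SEQ_2_SEQ_LM, num_virtual_tokens=20)
+ peft_model = get_peft_model(model, config)
+ ```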
41
+
42
+ Take a look at [Prefix tuning for conditional generation](../task_guides/seq2seq-prefix-tuning) for a step-by-step guide on how to train a model with prefix tuning.
43
+
44
+ ## P-tuning
45
+
46
+ <div class="flex justify-center">
47
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/p-tuning.png"/>
48
+ </div>
49
+ <small>Prompt tokens can be inserted anywhere in the input sequence, and they are optimized by a prompt encoder <a href="https://hf.co/papers/2103.10385">(image source)</a>.</small>
50
+
51
+ [P-tuning](https://hf.co/papers/2103.10385) is designed for natural language understanding (NLU) tasks and all language models.
52
+ It is another variation of a soft prompt method; P-tuning also adds a trainable embedding tensor that can be optimized to find better prompts, and it uses a prompt encoder (a bidirectional long short-term memory network, or LSTM) to optimize the prompt parameters. Unlike prefix tuning though:
53
+
54
+ - the prompt tokens can be inserted anywhere in the input sequence, and it isn't restricted to only the beginning
55
+ - the prompt tokens are only added to the input instead of adding them to every layer of the model
56
+ - introducing *anchor* tokens can improve performance because they indicate characteristics of a component in the input sequence
57
+
58
+ The results suggest that P-tuning is more efficient than manually crafting prompts, and it enables GPT-like models to compete with BERT-like models on NLU tasks.
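+
+ In PEFT, the prompt encoder is configured with `PromptEncoderConfig`. A minimal sketch, with a placeholder model and placeholder sizes:
+
+ ```py
+ from transformers import AutoModelForSequenceClassification
+ from peft import PromptEncoderConfig, TaskType, get_peft_model
+
+ model = AutoModelForSequenceClassification.from_pretrained("roberta-large", num_labels=2)
+ config = PromptEncoderConfig(task_type=TaskType.SEQ_CLS, num_virtual_tokens=20, encoder_hidden_size=128)
+ peft_model = get_peft_model(model, config)
+ peft_model.print_trainable_parameters()
+ ```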
59
+
60
+ Take a look at [P-tuning for sequence classification](../task_guides/ptuning-seq-classification) for a step-by-step guide on how to train a model with P-tuning.
61
+
62
+ ## Multitask prompt tuning
63
+
64
+ <div class="flex justify-center">
65
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/mpt.png"/>
66
+ </div>
67
+ <small><a href="https://hf.co/papers/2303.02861">Multitask prompt tuning enables parameter-efficient transfer learning</a>.</small>
68
+
69
+ [Multitask prompt tuning (MPT)](https://hf.co/papers/2303.02861) learns a single prompt from data for multiple task types that can be shared for different target tasks. Other existing approaches learn a separate soft prompt for each task that needs to be retrieved or aggregated for adaptation to target tasks. MPT consists of two stages:
70
+
71
+ 1. source training - for each task, its soft prompt is decomposed into task-specific vectors. The task-specific vectors are multiplied together to form another matrix W, and the Hadamard product is used between W and a shared prompt matrix P to generate a task-specific prompt matrix. The task-specific prompts are distilled into a single prompt matrix that is shared across all tasks. This prompt is trained with multitask training.
72
+ 2. target adaptation - to adapt the single prompt for a target task, a target prompt is initialized and expressed as the Hadamard product of the shared prompt matrix and the task-specific low-rank prompt matrix.
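+
+ As a rough, purely illustrative sketch of the decomposition described above (plain PyTorch, not the PEFT implementation):
+
+ ```py
+ import torch
+
+ num_tokens, dim = 8, 768                              # prompt length and embedding dimension (example values)
+ P_shared = torch.randn(num_tokens, dim)               # shared prompt matrix learned across tasks
+ u, v = torch.randn(num_tokens, 1), torch.randn(1, dim)  # task-specific low-rank vectors
+ P_task = P_shared * (u @ v)                           # Hadamard product with W = u v^T gives the task-specific prompt
+ ```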
73
+
74
+ <div class="flex justify-center">
75
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/mpt-decomposition.png"/>
76
+ </div>
77
+ <small><a href="https://hf.co/papers/2103.10385">Prompt decomposition</a>.</small>
peft_md_files/developer_guides/checkpoint.md ADDED
@@ -0,0 +1,250 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # PEFT checkpoint format
18
+
19
+ This document describes how PEFT's checkpoint files are structured and how to convert between the PEFT format and other formats.
20
+
21
+ ## PEFT files
22
+
23
+ PEFT (parameter-efficient fine-tuning) methods only update a small subset of a model's parameters rather than all of them. This is nice because checkpoint files can generally be much smaller than the original model files and are easier to store and share. However, this also means that to load a PEFT model, you need to have the original model available as well.
24
+
25
+ When you call [`~PeftModel.save_pretrained`] on a PEFT model, the PEFT model saves three files, described below:
26
+
27
+ 1. `adapter_model.safetensors` or `adapter_model.bin`
28
+
29
+ By default, the model is saved in the `safetensors` format, a secure alternative to the `bin` format, which is known to be susceptible to [security vulnerabilities](https://huggingface.co/docs/hub/security-pickle) because it uses the pickle utility under the hood. Both formats store the same `state_dict` though, and are interchangeable.
30
+
31
+ The `state_dict` only contains the parameters of the adapter module, not the base model. To illustrate the difference in size, a normal BERT model requires ~420MB of disk space, whereas an IA³ adapter on top of this BERT model only requires ~260KB.
32
+
33
+ 2. `adapter_config.json`
34
+
35
+ The `adapter_config.json` file contains the configuration of the adapter module, which is necessary to load the model. Below is an example of an `adapter_config.json` for an IA³ adapter with standard settings applied to a BERT model:
36
+
37
+ ```json
38
+ {
39
+ "auto_mapping": {
40
+ "base_model_class": "BertModel",
41
+ "parent_library": "transformers.models.bert.modeling_bert"
42
+ },
43
+ "base_model_name_or_path": "bert-base-uncased",
44
+ "fan_in_fan_out": false,
45
+ "feedforward_modules": [
46
+ "output.dense"
47
+ ],
48
+ "inference_mode": true,
49
+ "init_ia3_weights": true,
50
+ "modules_to_save": null,
51
+ "peft_type": "IA3",
52
+ "revision": null,
53
+ "target_modules": [
54
+ "key",
55
+ "value",
56
+ "output.dense"
57
+ ],
58
+ "task_type": null
59
+ }
60
+ ```
61
+
62
+ The configuration file contains:
63
+
64
+ - the adapter module type stored, `"peft_type": "IA3"`
65
+ - information about the base model like `"base_model_name_or_path": "bert-base-uncased"`
66
+ - the revision of the model (if any), `"revision": null`
67
+
68
+ If the base model is not a pretrained Transformers model, the latter two entries will be `null`. Other than that, the settings are all related to the specific IA³ adapter that was used to fine-tune the model.
69
+
70
+ 3. `README.md`
71
+
72
+ The generated `README.md` is the model card of a PEFT model and contains a few pre-filled entries. The intent of this is to make it easier to share the model with others and to provide some basic information about the model. This file is not needed to load the model.
73
+
74
+ ## Convert to PEFT format
75
+
76
+ When converting from another format to the PEFT format, we require both the `adapter_model.safetensors` (or `adapter_model.bin`) file and the `adapter_config.json` file.
77
+
78
+ ### adapter_model
79
+
80
+ For the model weights, it is important to use the correct mapping from parameter name to value for PEFT to load the file. Getting this mapping right is an exercise in checking the implementation details, as there is no generally agreed upon format for PEFT adapters.
81
+
82
+ Fortunately, figuring out this mapping is not overly complicated for common base cases. Let's look at a concrete example, the [`LoraLayer`](https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora/layer.py):
83
+
84
+ ```python
85
+ # showing only part of the code
86
+
87
+ class LoraLayer(BaseTunerLayer):
88
+ # All names of layers that may contain (trainable) adapter weights
89
+ adapter_layer_names = ("lora_A", "lora_B", "lora_embedding_A", "lora_embedding_B")
90
+ # All names of other parameters that may contain adapter-related parameters
91
+ other_param_names = ("r", "lora_alpha", "scaling", "lora_dropout")
92
+
93
+ def __init__(self, base_layer: nn.Module, **kwargs) -> None:
94
+ self.base_layer = base_layer
95
+ self.r = {}
96
+ self.lora_alpha = {}
97
+ self.scaling = {}
98
+ self.lora_dropout = nn.ModuleDict({})
99
+ self.lora_A = nn.ModuleDict({})
100
+ self.lora_B = nn.ModuleDict({})
101
+ # For Embedding layer
102
+ self.lora_embedding_A = nn.ParameterDict({})
103
+ self.lora_embedding_B = nn.ParameterDict({})
104
+ # Mark the weight as unmerged
105
+ self._disable_adapters = False
106
+ self.merged_adapters = []
107
+ self.use_dora: dict[str, bool] = {}
108
+ self.lora_magnitude_vector: Optional[torch.nn.ParameterDict] = None # for DoRA
109
+ self._caches: dict[str, Any] = {}
110
+ self.kwargs = kwargs
111
+ ```
112
+
113
+ In the `__init__` code used by all `LoraLayer` classes in PEFT, there are a bunch of parameters used to initialize the model, but only a few are relevant for the checkpoint file: `lora_A`, `lora_B`, `lora_embedding_A`, and `lora_embedding_B`. These parameters are listed in the class attribute `adapter_layer_names` and contain the learnable parameters, so they must be included in the checkpoint file. All the other parameters, like the rank `r`, are derived from the `adapter_config.json` and must be included there (unless the default value is used).
114
+
115
+ Let's check the `state_dict` of a PEFT LoRA model applied to BERT. When printing the first five keys using the default LoRA settings (the remaining keys are the same, just with different layer numbers), we get:
116
+
117
+ - `base_model.model.encoder.layer.0.attention.self.query.lora_A.weight`
118
+ - `base_model.model.encoder.layer.0.attention.self.query.lora_B.weight`
119
+ - `base_model.model.encoder.layer.0.attention.self.value.lora_A.weight`
120
+ - `base_model.model.encoder.layer.0.attention.self.value.lora_B.weight`
121
+ - `base_model.model.encoder.layer.1.attention.self.query.lora_A.weight`
122
+ - etc.
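+
+ You can reproduce such a listing with a few lines of code; this is a minimal sketch assuming `bert-base-uncased` and an otherwise default `LoraConfig`:
+
+ ```python
+ from transformers import AutoModel
+ from peft import LoraConfig, get_peft_model, get_peft_model_state_dict
+
+ model = AutoModel.from_pretrained("bert-base-uncased")
+ peft_model = get_peft_model(model, LoraConfig())
+ # keys as they would be written to adapter_model.safetensors
+ print(list(get_peft_model_state_dict(peft_model).keys())[:5])
+ ```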
123
+
124
+ Let's break this down:
125
+
126
+ - By default, for BERT models, LoRA is applied to the `query` and `value` layers of the attention module. This is why you see `attention.self.query` and `attention.self.value` in the key names for each layer.
127
+ - LoRA decomposes the weights into two low-rank matrices, `lora_A` and `lora_B`. This is where `lora_A` and `lora_B` come from in the key names.
128
+ - These LoRA matrices are implemented as `nn.Linear` layers, so the parameters are stored in the `.weight` attribute (`lora_A.weight`, `lora_B.weight`).
129
+ - By default, LoRA isn't applied to BERT's embedding layer, so there are _no entries_ for `lora_embedding_A` and `lora_embedding_B`.
130
+ - The keys of the `state_dict` always start with `"base_model.model."`. The reason is that, in PEFT, we wrap the base model inside a tuner-specific model (`LoraModel` in this case), which itself is wrapped in a general PEFT model (`PeftModel`). For this reason, these two prefixes are added to the keys. When converting to the PEFT format, it is required to add these prefixes.
131
+
132
+ <Tip>
133
+
134
+ This last point is not true for prefix tuning techniques like prompt tuning. There, the extra embeddings are directly stored in the `state_dict` without any prefixes added to the keys.
135
+
136
+ </Tip>
137
+
138
+ When inspecting the parameter names in the loaded model, you might be surprised to find that they look a bit different, e.g. `base_model.model.encoder.layer.0.attention.self.query.lora_A.default.weight`. The difference is the *`.default`* part in the second to last segment. This part exists because PEFT generally allows the addition of multiple adapters at once (using an `nn.ModuleDict` or `nn.ParameterDict` to store them). For example, if you add another adapter called "other", the key for that adapter would be `base_model.model.encoder.layer.0.attention.self.query.lora_A.other.weight`.
139
+
140
+ When you call [`~PeftModel.save_pretrained`], the adapter name is stripped from the keys. The reason is that the adapter name is not an important part of the model architecture; it is just an arbitrary name. When loading the adapter, you could choose a totally different name, and the model would still work the same way. This is why the adapter name is not stored in the checkpoint file.
141
+
142
+ <Tip>
143
+
144
+ If you call `save_pretrained("some/path")` and the adapter name is not `"default"`, the adapter is stored in a sub-directory with the same name as the adapter. So if the name is "other", it would be stored inside of `some/path/other`.
145
+
146
+ </Tip>
147
+
148
+ In some circumstances, deciding which values to add to the checkpoint file can become a bit more complicated. For example, in PEFT, DoRA is implemented as a special case of LoRA. If you want to convert a DoRA model to PEFT, you should create a LoRA checkpoint with extra entries for DoRA. You can see this in the `__init__` of the previous `LoraLayer` code:
149
+
150
+ ```python
151
+ self.lora_magnitude_vector: Optional[torch.nn.ParameterDict] = None # for DoRA
152
+ ```
153
+
154
+ This indicates that there is an optional extra parameter per layer for DoRA.
155
+
156
+ ### adapter_config
157
+
158
+ All the other information needed to load a PEFT model is contained in the `adapter_config.json` file. Let's check this file for a LoRA model applied to BERT:
159
+
160
+ ```json
161
+ {
162
+ "alpha_pattern": {},
163
+ "auto_mapping": {
164
+ "base_model_class": "BertModel",
165
+ "parent_library": "transformers.models.bert.modeling_bert"
166
+ },
167
+ "base_model_name_or_path": "bert-base-uncased",
168
+ "bias": "none",
169
+ "fan_in_fan_out": false,
170
+ "inference_mode": true,
171
+ "init_lora_weights": true,
172
+ "layer_replication": null,
173
+ "layers_pattern": null,
174
+ "layers_to_transform": null,
175
+ "loftq_config": {},
176
+ "lora_alpha": 8,
177
+ "lora_dropout": 0.0,
178
+ "megatron_config": null,
179
+ "megatron_core": "megatron.core",
180
+ "modules_to_save": null,
181
+ "peft_type": "LORA",
182
+ "r": 8,
183
+ "rank_pattern": {},
184
+ "revision": null,
185
+ "target_modules": [
186
+ "query",
187
+ "value"
188
+ ],
189
+ "task_type": null,
190
+ "use_dora": false,
191
+ "use_rslora": false
192
+ }
193
+ ```
194
+
195
+ This contains a lot of entries, and at first glance, it could feel overwhelming to figure out all the right values to put in there. However, most of the entries are not necessary to load the model. This is either because they use the default values and don't need to be added or because they only affect the initialization of the LoRA weights, which is irrelevant when it comes to loading the model. If you find that you don't know what a specific parameter does, e.g., `"use_rslora"`, don't add it, and you should be fine. Also note that as more options are added, this file will get more entries in the future, but it should be backward compatible.
196
+
197
+ At the minimum, you should include the following entries:
198
+
199
+ ```json
200
+ {
201
+ "target_modules": ["query", "value"],
202
+ "peft_type": "LORA"
203
+ }
204
+ ```
205
+
206
+ However, adding as many entries as possible, like the rank `r` or the `base_model_name_or_path` (if it's a Transformers model) is recommended. This information can help others understand the model better and share it more easily. To check which keys and values are expected, check out the [config.py](https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora/config.py) file (as an example, this is the config file for LoRA) in the PEFT source code.
207
+
208
+ ## Model storage
209
+
210
+ In some circumstances, you might want to store the whole PEFT model, including the base weights. This can be necessary if, for instance, the base model is not available to the users trying to load the PEFT model. You can merge the weights first or convert it into a Transformers model.
211
+
212
+ ### Merge the weights
213
+
214
+ The most straightforward way to store the whole PEFT model is to merge the adapter weights into the base weights:
215
+
216
+ ```python
217
+ merged_model = model.merge_and_unload()
218
+ merged_model.save_pretrained(...)
219
+ ```
220
+
221
+ There are some disadvantages to this approach, though:
222
+
223
+ - Once [`~LoraModel.merge_and_unload`] is called, you get a basic model without any PEFT-specific functionality. This means you can't use any of the PEFT-specific methods anymore.
224
+ - You cannot unmerge the weights, load multiple adapters at once, disable the adapter, etc.
225
+ - Not all PEFT methods support merging weights.
226
+ - Some PEFT methods may generally allow merging, but not with specific settings (e.g. when using certain quantization techniques).
227
+ - The whole model will be much larger than the PEFT model, as it will contain all the base weights as well.
228
+
229
+ But inference with a merged model should be a bit faster.
230
+
231
+ ### Convert to a Transformers model
232
+
233
+ Another way to save the whole model, assuming the base model is a Transformers model, is this hacky approach: directly insert the PEFT weights into the base model and save it, which only works if you "trick" Transformers into believing the PEFT model is not a PEFT model. This only works with LoRA because other adapters are not implemented in Transformers.
234
+
235
+ ```python
236
+ model = ... # the PEFT model
237
+ ...
238
+ # after you finish training the model, save it in a temporary location
239
+ model.save_pretrained(<temp_location>)
240
+ # now load this model directly into a transformers model, without the PEFT wrapper
241
+ # the PEFT weights are directly injected into the base model
242
+ model_loaded = AutoModel.from_pretrained(<temp_location>)
243
+ # now make the loaded model believe that it is _not_ a PEFT model
244
+ model_loaded._hf_peft_config_loaded = False
245
+ # now when we save it, it will save the whole model
246
+ model_loaded.save_pretrained(<final_location>)
247
+ # or upload to Hugging Face Hub
248
+ model_loaded.push_to_hub(<final_location>)
249
+ ```
250
+
peft_md_files/developer_guides/contributing.md ADDED
@@ -0,0 +1,92 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # Contribute to PEFT
18
+
19
+ We are happy to accept contributions to PEFT. If you plan to contribute, please read this to make the process as smooth as possible.
20
+
21
+ ## Installation
22
+
23
+ For code contributions to PEFT, you should choose the ["source"](../install#source) installation method.
24
+
25
+ If you are new to creating a pull request, follow the [Creating a pull request](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request) guide by GitHub.
26
+
27
+ ## Tests and code quality checks
28
+
29
+ Regardless of the contribution type (unless it’s only about the docs), you should run tests and code quality checks before creating a PR to ensure your contribution doesn’t break anything and follows the project standards.
30
+
31
+ We provide a Makefile to execute the necessary tests. Run the code below for the unit test:
32
+
33
+ ```sh
34
+ make test
35
+ ```
36
+
37
+ Run one of the following to either only check or check and fix code quality and style:
38
+
39
+ ```sh
40
+ make quality # just check
41
+ make style # check and fix
42
+ ```
43
+
44
+ You can also set up [`pre-commit`](https://pre-commit.com/) to run these fixes
45
+ automatically as Git commit hooks.
46
+
47
+ ```bash
48
+ $ pip install pre-commit
49
+ $ pre-commit install
50
+ ```
51
+
52
+ Running all the tests can take a couple of minutes, so during development it can be more efficient to only run tests specific to your change:
53
+
54
+ ```sh
55
+ pytest tests/ -k <name-of-test>
56
+ ```
57
+
58
+ This should finish much quicker and allow for faster iteration. However, you should still run the whole test suite before creating a PR because your change can inadvertently break tests that at first glance are unrelated.
59
+
60
+ If your change is specific to a hardware setting (e.g., it requires CUDA), take a look at [tests/test_gpu_examples.py](https://github.com/huggingface/peft/blob/1c1c7fdaa6e6abaa53939b865dee1eded82ad032/tests/test_gpu_examples.py) and [tests/test_common_gpu.py](https://github.com/huggingface/peft/blob/1c1c7fdaa6e6abaa53939b865dee1eded82ad032/tests/test_common_gpu.py) to see if it makes sense to add tests there. If your change could have an effect on saving and loading models, please run the tests with the `--regression` flag to trigger regression tests.
61
+
62
+ It can happen that while you’re working on your PR, the underlying code base changes due to other changes being merged. If that happens – especially when there is a merge conflict – please update your branch with the latest changes. This can be a merge or a rebase, and we'll squash and merge the PR once it’s ready.
63
+
64
+ ## PR description
65
+
66
+ When opening a PR, please provide a nice description of the change you're proposing. If it relates to other issues or PRs, please reference them. Providing a good description not only helps the reviewers review your code better and faster, it can also be used later (as a basis) for the commit message which helps with long term maintenance of the project.
67
+
68
+ If your code makes some non-trivial changes, it may also be a good idea to add comments to the code to explain those changes. For example, if you had to iterate on your implementation multiple times because the most obvious way didn’t work, it’s a good indication that a code comment is needed.
69
+
70
+ ## Bugfixes
71
+
72
+ Please give a description of the circumstances that led to the bug. If there is an existing issue, please link to it (e.g., “Resolves #12345”).
73
+
74
+ Ideally when a bugfix is provided, it should be accompanied by a test for the bug. The test should fail with the current code and pass with the bugfix. Add a comment to the test that references the issue or PR. Without a test, it is more difficult to prevent regressions in the future.
75
+
76
+ ## Add a new fine-tuning method
77
+
78
+ New parameter-efficient fine-tuning methods are developed all the time. If you would like to add a new and promising method to PEFT, please follow these steps.
79
+
80
+ 1. Before you start to implement the new method, please open a GitHub issue with your proposal. This way, the maintainers can give you some early feedback.
81
+ 2. Please add a link to the source (usually a paper) of the method. Some evidence should be provided that there is general interest in using the method. We will not add new methods that are freshly published but for which there is no evidence of demand.
82
+ 3. When implementing the method, it makes sense to look for existing implementations that already exist as a guide. Moreover, when you structure your code, please take inspiration from the other PEFT methods. For example, if your method is similar to LoRA, it makes sense to structure your code similarly or even reuse some functions or classes where it makes sense (some code duplication is okay, but don’t overdo it).
83
+ 4. Ideally, in addition to the implementation of the new method, there should also be examples (notebooks, scripts), documentation, and an extensive test suite that proves the method works with a variety of tasks. However, this can be more challenging so it is acceptable to only provide the implementation and at least one working example. Documentation and tests can be added in follow up PRs.
84
+ 5. Once you have something that seems to be working, don’t hesitate to create a draft PR even if it’s not in a mergeable state yet. The maintainers are happy to give you feedback and guidance along the way.
85
+
86
+ ## Add other features
87
+
88
+ It is best if you first open an issue on GitHub with a proposal to add the new feature. This way, you can discuss with the maintainers if it makes sense to add the feature before spending too much time on implementing it.
89
+
90
+ New features should generally be accompanied by tests and documentation or examples. Without the latter, users will have a hard time discovering your cool new feature.
91
+
92
+ Changes to the code should be implemented in a backward-compatible way. For example, existing code should continue to work the same way after the feature is merged.
peft_md_files/developer_guides/custom_models.md ADDED
@@ -0,0 +1,310 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # Custom models
18
+
19
+ Some fine-tuning techniques, such as prompt tuning, are specific to language models. That means in 🤗 PEFT, it is
20
+ assumed a 🤗 Transformers model is being used. However, other fine-tuning techniques - like
21
+ [LoRA](../conceptual_guides/lora) - are not restricted to specific model types.
22
+
23
+ In this guide, we will see how LoRA can be applied to a multilayer perceptron, a computer vision model from the [timm](https://huggingface.co/docs/timm/index) library, or a new 🤗 Transformers architecture.
24
+
25
+ ## Multilayer perceptron
26
+
27
+ Let's assume that we want to fine-tune a multilayer perceptron with LoRA. Here is the definition:
28
+
29
+ ```python
30
+ from torch import nn
31
+
32
+
33
+ class MLP(nn.Module):
34
+ def __init__(self, num_units_hidden=2000):
35
+ super().__init__()
36
+ self.seq = nn.Sequential(
37
+ nn.Linear(20, num_units_hidden),
38
+ nn.ReLU(),
39
+ nn.Linear(num_units_hidden, num_units_hidden),
40
+ nn.ReLU(),
41
+ nn.Linear(num_units_hidden, 2),
42
+ nn.LogSoftmax(dim=-1),
43
+ )
44
+
45
+ def forward(self, X):
46
+ return self.seq(X)
47
+ ```
48
+
49
+ This is a straightforward multilayer perceptron with an input layer, a hidden layer, and an output layer.
50
+
51
+ <Tip>
52
+
53
+ For this toy example, we choose an exceedingly large number of hidden units to highlight the efficiency gains
54
+ from PEFT, but those gains are in line with more realistic examples.
55
+
56
+ </Tip>
57
+
58
+ There are a few linear layers in this model that could be tuned with LoRA. When working with common 🤗 Transformers
59
+ models, PEFT will know which layers to apply LoRA to, but in this case, it is up to us as a user to choose the layers.
60
+ To determine the names of the layers to tune:
61
+
62
+ ```python
63
+ print([(n, type(m)) for n, m in MLP().named_modules()])
64
+ ```
65
+
66
+ This should print:
67
+
68
+ ```
69
+ [('', __main__.MLP),
70
+ ('seq', torch.nn.modules.container.Sequential),
71
+ ('seq.0', torch.nn.modules.linear.Linear),
72
+ ('seq.1', torch.nn.modules.activation.ReLU),
73
+ ('seq.2', torch.nn.modules.linear.Linear),
74
+ ('seq.3', torch.nn.modules.activation.ReLU),
75
+ ('seq.4', torch.nn.modules.linear.Linear),
76
+ ('seq.5', torch.nn.modules.activation.LogSoftmax)]
77
+ ```
78
+
79
+ Let's say we want to apply LoRA to the input layer and to the hidden layer, those are `'seq.0'` and `'seq.2'`. Moreover,
80
+ let's assume we want to update the output layer without LoRA, that would be `'seq.4'`. The corresponding config would
81
+ be:
82
+
83
+ ```python
84
+ from peft import LoraConfig
85
+
86
+ config = LoraConfig(
87
+ target_modules=["seq.0", "seq.2"],
88
+ modules_to_save=["seq.4"],
89
+ )
90
+ ```
91
+
92
+ With that, we can create our PEFT model and check the fraction of parameters trained:
93
+
94
+ ```python
95
+ from peft import get_peft_model
96
+
97
+ model = MLP()
98
+ peft_model = get_peft_model(model, config)
99
+ peft_model.print_trainable_parameters()
100
+ # prints trainable params: 56,164 || all params: 4,100,164 || trainable%: 1.369798866581922
101
+ ```
102
+
103
+ Finally, we can use any training framework we like, or write our own fit loop, to train the `peft_model`.
104
+
105
+ For a complete example, check out [this notebook](https://github.com/huggingface/peft/blob/main/examples/multilayer_perceptron/multilayer_perceptron_lora.ipynb).
106
+
107
+ ## timm models
108
+
109
+ The [timm](https://huggingface.co/docs/timm/index) library contains a large number of pretrained computer vision models.
110
+ Those can also be fine-tuned with PEFT. Let's check out how this works in practice.
111
+
112
+ To start, ensure that timm is installed in the Python environment:
113
+
114
+ ```bash
115
+ python -m pip install -U timm
116
+ ```
117
+
118
+ Next we load a timm model for an image classification task:
119
+
120
+ ```python
121
+ import timm
122
+
123
+ num_classes = ...
124
+ model_id = "timm/poolformer_m36.sail_in1k"
125
+ model = timm.create_model(model_id, pretrained=True, num_classes=num_classes)
126
+ ```
127
+
128
+ Again, we need to make a decision about what layers to apply LoRA to. Since LoRA supports 2D conv layers, and since
129
+ those are a major building block of this model, we should apply LoRA to the 2D conv layers. To identify the names of
130
+ those layers, let's look at all the layer names:
131
+
132
+ ```python
133
+ print([(n, type(m)) for n, m in model.named_modules()])
134
+ ```
135
+
136
+ This will print a very long list; we'll only show the first few entries:
137
+
138
+ ```
139
+ [('', timm.models.metaformer.MetaFormer),
140
+ ('stem', timm.models.metaformer.Stem),
141
+ ('stem.conv', torch.nn.modules.conv.Conv2d),
142
+ ('stem.norm', torch.nn.modules.linear.Identity),
143
+ ('stages', torch.nn.modules.container.Sequential),
144
+ ('stages.0', timm.models.metaformer.MetaFormerStage),
145
+ ('stages.0.downsample', torch.nn.modules.linear.Identity),
146
+ ('stages.0.blocks', torch.nn.modules.container.Sequential),
147
+ ('stages.0.blocks.0', timm.models.metaformer.MetaFormerBlock),
148
+ ('stages.0.blocks.0.norm1', timm.layers.norm.GroupNorm1),
149
+ ('stages.0.blocks.0.token_mixer', timm.models.metaformer.Pooling),
150
+ ('stages.0.blocks.0.token_mixer.pool', torch.nn.modules.pooling.AvgPool2d),
151
+ ('stages.0.blocks.0.drop_path1', torch.nn.modules.linear.Identity),
152
+ ('stages.0.blocks.0.layer_scale1', timm.models.metaformer.Scale),
153
+ ('stages.0.blocks.0.res_scale1', torch.nn.modules.linear.Identity),
154
+ ('stages.0.blocks.0.norm2', timm.layers.norm.GroupNorm1),
155
+ ('stages.0.blocks.0.mlp', timm.layers.mlp.Mlp),
156
+ ('stages.0.blocks.0.mlp.fc1', torch.nn.modules.conv.Conv2d),
157
+ ('stages.0.blocks.0.mlp.act', torch.nn.modules.activation.GELU),
158
+ ('stages.0.blocks.0.mlp.drop1', torch.nn.modules.dropout.Dropout),
159
+ ('stages.0.blocks.0.mlp.norm', torch.nn.modules.linear.Identity),
160
+ ('stages.0.blocks.0.mlp.fc2', torch.nn.modules.conv.Conv2d),
161
+ ('stages.0.blocks.0.mlp.drop2', torch.nn.modules.dropout.Dropout),
162
+ ('stages.0.blocks.0.drop_path2', torch.nn.modules.linear.Identity),
163
+ ('stages.0.blocks.0.layer_scale2', timm.models.metaformer.Scale),
164
+ ('stages.0.blocks.0.res_scale2', torch.nn.modules.linear.Identity),
165
+ ('stages.0.blocks.1', timm.models.metaformer.MetaFormerBlock),
166
+ ('stages.0.blocks.1.norm1', timm.layers.norm.GroupNorm1),
167
+ ('stages.0.blocks.1.token_mixer', timm.models.metaformer.Pooling),
168
+ ('stages.0.blocks.1.token_mixer.pool', torch.nn.modules.pooling.AvgPool2d),
169
+ ...
170
+ ('head.global_pool.flatten', torch.nn.modules.linear.Identity),
171
+ ('head.norm', timm.layers.norm.LayerNorm2d),
172
+ ('head.flatten', torch.nn.modules.flatten.Flatten),
173
+ ('head.drop', torch.nn.modules.linear.Identity),
174
+ ('head.fc', torch.nn.modules.linear.Linear)]
175
+ ]
176
+ ```
177
+
178
+ Upon closer inspection, we see that the 2D conv layers have names such as `"stages.0.blocks.0.mlp.fc1"` and
179
+ `"stages.0.blocks.0.mlp.fc2"`. How can we match those layer names specifically? You can write a [regular
180
+ expression](https://docs.python.org/3/library/re.html) to match the layer names. For our case, the regex
181
+ `r".*\.mlp\.fc\d"` should do the job.
182
+
183
+ Furthermore, as in the first example, we should ensure that the output layer, in this case the classification head, is
184
+ also updated. Looking at the end of the list printed above, we can see that it's named `'head.fc'`. With that in mind,
185
+ here is our LoRA config:
186
+
187
+ ```python
188
+ config = LoraConfig(target_modules=r".*\.mlp\.fc\d", modules_to_save=["head.fc"])
189
+ ```
190
+
191
+ Then we only need to create the PEFT model by passing our base model and the config to `get_peft_model`:
192
+
193
+ ```python
194
+ peft_model = get_peft_model(model, config)
195
+ peft_model.print_trainable_parameters()
196
+ # prints trainable params: 1,064,454 || all params: 56,467,974 || trainable%: 1.88505789139876
197
+ ```
198
+
199
+ This shows us that we only need to train less than 2% of all parameters, which is a huge efficiency gain.
200
+
201
+ For a complete example, check out [this notebook](https://github.com/huggingface/peft/blob/main/examples/image_classification/image_classification_timm_peft_lora.ipynb).
202
+
203
+ ## New transformers architectures
204
+
205
+ When new popular transformers architectures are released, we do our best to quickly add them to PEFT. If you come across a transformers model that is not supported out of the box, don't worry, it will most likely still work if the config is set correctly. Specifically, you have to identify the layers that should be adapted and set them correctly when initializing the corresponding config class, e.g. `LoraConfig`. Here are some tips to help with this.
206
+
207
+ As a first step, it is a good idea to check the existing models for inspiration. You can find them inside of [constants.py](https://github.com/huggingface/peft/blob/main/src/peft/utils/constants.py) in the PEFT repository. Often, you'll find a similar architecture that uses the same names. For example, if the new model architecture is a variation of the "mistral" model and you want to apply LoRA, you can see that the entry for "mistral" in `TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING` contains `["q_proj", "v_proj"]`. This tells you that for "mistral" models, the `target_modules` for LoRA should be `["q_proj", "v_proj"]`:
208
+
209
+ ```python
210
+ from peft import LoraConfig, get_peft_model
211
+
212
+ my_mistral_model = ...
213
+ config = LoraConfig(
214
+ target_modules=["q_proj", "v_proj"],
215
+ ..., # other LoRA arguments
216
+ )
217
+ peft_model = get_peft_model(my_mistral_model, config)
218
+ ```
219
+
220
+ If that doesn't help, check the existing modules in your model architecture with the `named_modules` method and try to identify the attention layers, especially the key, query, and value layers. Those will often have names such as `c_attn`, `query`, `q_proj`, etc. The key layer is not always adapted, and ideally, you should check whether including it results in better performance.
221
+
222
+ Additionally, linear layers are common targets to be adapted (e.g. in the [QLoRA paper](https://arxiv.org/abs/2305.14314), the authors suggest adapting them as well). Their names will often contain the strings `fc` or `dense`.
223
+
224
+ If you want to add a new model to PEFT, please create an entry in [constants.py](https://github.com/huggingface/peft/blob/main/src/peft/utils/constants.py) and open a pull request on the [repository](https://github.com/huggingface/peft/pulls). Don't forget to update the [README](https://github.com/huggingface/peft#models-support-matrix) as well.
225
+
226
+ ## Verify parameters and layers
227
+
228
+ You can verify whether you've correctly applied a PEFT method to your model in a few ways.
229
+
230
+ * Check the fraction of parameters that are trainable with the [`~PeftModel.print_trainable_parameters`] method. If this number is lower or higher than expected, check the model `repr` by printing the model. This shows the names of all the layer types in the model. Ensure that only the intended target layers are replaced by the adapter layers. For example, if LoRA is applied to `nn.Linear` layers, then you should only see `lora.Linear` layers being used.
231
+
232
+ ```py
233
+ peft_model.print_trainable_parameters()
234
+ ```
235
+
236
+ * Another way you can view the adapted layers is to use the `targeted_module_names` attribute to list the name of each module that was adapted.
237
+
238
+ ```python
239
+ print(peft_model.targeted_module_names)
240
+ ```
241
+
242
+ ## Unsupported module types
243
+
244
+ Methods like LoRA only work if the target modules are supported by PEFT. For example, it's possible to apply LoRA to `nn.Linear` and `nn.Conv2d` layers, but not, for instance, to `nn.LSTM`. If you find a layer class you want to apply PEFT to is not supported, you can:
245
+
246
+ - define a custom mapping to dynamically dispatch custom modules in LoRA
247
+ - open an [issue](https://github.com/huggingface/peft/issues) and request the feature where maintainers will implement it or guide you on how to implement it yourself if demand for this module type is sufficiently high
248
+
249
+ ### Experimental support for dynamic dispatch of custom modules in LoRA
250
+
251
+ > [!WARNING]
252
+ > This feature is experimental and subject to change, depending on its reception by the community. We will introduce a public and stable API if there is significant demand for it.
253
+
254
+ PEFT supports an experimental API for custom module types for LoRA. Let's assume you have a LoRA implementation for LSTMs. Normally, you would not be able to tell PEFT to use it, even if it would theoretically work with PEFT. However, this is possible with dynamic dispatch of custom layers.
255
+
256
+ The experimental API currently looks like this:
257
+
258
+ ```python
259
+ class MyLoraLSTMLayer:
260
+ ...
261
+
262
+ base_model = ... # load the base model that uses LSTMs
263
+
264
+ # add the LSTM layer names to target_modules
265
+ config = LoraConfig(..., target_modules=["lstm"])
266
+ # define a mapping from base layer type to LoRA layer type
267
+ custom_module_mapping = {nn.LSTM: MyLoraLSTMLayer}
268
+ # register the new mapping
269
+ config._register_custom_module(custom_module_mapping)
270
+ # after registration, create the PEFT model
271
+ peft_model = get_peft_model(base_model, config)
272
+ # do training
273
+ ```
274
+
275
+ <Tip>
276
+
277
+ When you call [`get_peft_model`], you will see a warning because PEFT does not recognize the targeted module type. In this case, you can ignore this warning.
278
+
279
+ </Tip>
280
+
281
+ By supplying a custom mapping, PEFT first checks the base model's layers against the custom mapping and dispatches to the custom LoRA layer type if there is a match. If there is no match, PEFT checks the built-in LoRA layer types for a match.
282
+
283
+ Therefore, this feature can also be used to override existing dispatch logic, e.g. if you want to use your own LoRA layer for `nn.Linear` instead of using the one provided by PEFT.
284
+
285
+ When creating your custom LoRA module, please follow the same rules as the [existing LoRA modules](https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora/layer.py). Some important constraints to consider:
286
+
287
+ - The custom module should inherit from `nn.Module` and `peft.tuners.lora.layer.LoraLayer`.
288
+ - The `__init__` method of the custom module should have the positional arguments `base_layer` and `adapter_name`. After this, there are additional `**kwargs` that you are free to use or ignore.
289
+ - The learnable parameters should be stored in an `nn.ModuleDict` or `nn.ParameterDict`, where the key corresponds to the name of the specific adapter (remember that a model can have more than one adapter at a time).
290
+ - The name of these learnable parameter attributes should start with `"lora_"`, e.g. `self.lora_new_param = ...`.
291
+ - Some methods are optional, e.g. you only need to implement `merge` and `unmerge` if you want to support weight merging.
292
+
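+ To make these constraints more concrete, here is a rough, untested sketch of a custom LoRA layer for `nn.Linear` (for example, to override PEFT's built-in dispatch as mentioned above). The class name, attribute names, and initialization values are made up for illustration, and the forward pass is deliberately minimal:
+
+ ```python
+ import torch
+ import torch.nn as nn
+ from peft.tuners.lora.layer import LoraLayer
+
+ class MyLoraLinear(nn.Module, LoraLayer):
+     def __init__(self, base_layer, adapter_name, r=8, **kwargs):
+         super().__init__()
+         LoraLayer.__init__(self, base_layer)
+         # learnable parameters live in ParameterDicts keyed by the adapter name,
+         # and their attribute names start with "lora_"
+         self.lora_my_A = nn.ParameterDict()
+         self.lora_my_B = nn.ParameterDict()
+         self.lora_my_A[adapter_name] = nn.Parameter(0.01 * torch.randn(r, base_layer.in_features))
+         # B starts at zero so the initial update is a no-op, as in standard LoRA
+         self.lora_my_B[adapter_name] = nn.Parameter(torch.zeros(base_layer.out_features, r))
+
+     def forward(self, x):
+         # base output plus the low-rank update of every registered adapter
+         result = self.base_layer(x)
+         for name in self.lora_my_A:
+             result = result + x @ self.lora_my_A[name].T @ self.lora_my_B[name].T
+         return result
+ ```
+
+ Registering it then follows the same pattern as the LSTM example above, e.g. `config._register_custom_module({nn.Linear: MyLoraLinear})`.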
293
+ Currently, the information about the custom module does not persist when you save the model. When loading the model, you have to register the custom modules again.
294
+
295
+ ```python
296
+ # saving works as always and includes the parameters of the custom modules
297
+ peft_model.save_pretrained(<model-path>)
298
+
299
+ # loading the model later:
300
+ base_model = ...
301
+ # load the LoRA config that you saved earlier
302
+ config = LoraConfig.from_pretrained(<model-path>)
303
+ # register the custom module again, the same way as the first time
304
+ custom_module_mapping = {nn.LSTM: MyLoraLSTMLayer}
305
+ config._register_custom_module(custom_module_mapping)
306
+ # pass the config instance to from_pretrained:
307
+ peft_model = PeftModel.from_pretrained(base_model, <model-path>, config=config)
308
+ ```
309
+
310
+ If you use this feature and find it useful, or if you encounter problems, let us know by creating an issue or a discussion on GitHub. This allows us to estimate the demand for this feature and add a public API if it is sufficiently high.
peft_md_files/developer_guides/lora.md ADDED
@@ -0,0 +1,384 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # LoRA
18
+
19
+ LoRA is a low-rank decomposition method that reduces the number of trainable parameters, which speeds up finetuning large models and uses less memory. In PEFT, using LoRA is as easy as setting up a [`LoraConfig`] and wrapping it with [`get_peft_model`] to create a trainable [`PeftModel`].
20
+
21
+ This guide explores in more detail other options and features for using LoRA.
22
+
23
+ ## Initialization
24
+
25
+ The initialization of LoRA weights is controlled by the parameter `init_lora_weights` in [`LoraConfig`]. By default, PEFT initializes LoRA weights with Kaiming-uniform for weight A and zeros for weight B resulting in an identity transform (same as the reference [implementation](https://github.com/microsoft/LoRA)).
26
+
27
+ It is also possible to pass `init_lora_weights="gaussian"`. As the name suggests, this initializes weight A with a Gaussian distribution and zeros for weight B (this is how [Diffusers](https://huggingface.co/docs/diffusers/index) initializes LoRA weights).
28
+
29
+ ```py
30
+ from peft import LoraConfig
31
+
32
+ config = LoraConfig(init_lora_weights="gaussian", ...)
33
+ ```
34
+
35
+ There is also an option to set `init_lora_weights=False`, which is useful for debugging and testing; this should be the only situation in which you use this option. When choosing this option, the LoRA weights are initialized such that they do *not* result in an identity transform.
36
+
37
+ ```py
38
+ from peft import LoraConfig
39
+
40
+ config = LoraConfig(init_lora_weights=False, ...)
41
+ ```
42
+
43
+ ### PiSSA
44
+ [PiSSA](https://arxiv.org/abs/2404.02948) initializes the LoRA adapter using the principal singular values and singular vectors. This straightforward modification allows PiSSA to converge more rapidly than LoRA and ultimately attain superior performance. Moreover, PiSSA reduces the quantization error compared to QLoRA, leading to further enhancements.
45
+
46
+ Set the initialization method to `"pissa"`; this may take several minutes because it executes SVD on the pre-trained model:
47
+ ```python
48
+ from peft import LoraConfig
49
+ config = LoraConfig(init_lora_weights="pissa", ...)
50
+ ```
51
+ Alternatively, execute fast SVD, which takes only a few seconds. The number of iterations determines the trade-off between the error and computation time:
52
+ ```python
53
+ lora_config = LoraConfig(init_lora_weights="pissa_niter_[number of iters]", ...)
54
+ ```
55
+ For detailed instructions on using PiSSA, please follow [these instructions](https://github.com/fxmeng/peft/tree/main/examples/pissa_finetuning).
56
+
57
+ ### OLoRA
58
+ [OLoRA](https://arxiv.org/abs/2406.01775) utilizes QR decomposition to initialize the LoRA adapters. OLoRA translates the base weights of the model by a factor of their QR decompositions, i.e., it mutates the weights before performing any training on them. This approach significantly improves stability, accelerates convergence speed, and ultimately achieves superior performance.
59
+
60
+ You just need to pass a single additional option to use OLoRA:
61
+ ```python
62
+ from peft import LoraConfig
63
+ config = LoraConfig(init_lora_weights="olora", ...)
64
+ ```
65
+ For more advanced usage, please refer to our [documentation](https://github.com/huggingface/peft/tree/main/examples/olora_finetuning).
66
+ ### LoftQ
67
+
68
+ #### Standard approach
69
+
70
+ When quantizing the base model for QLoRA training, consider using the [LoftQ initialization](https://arxiv.org/abs/2310.08659), which has been shown to improve performance when training quantized models. The idea is that the LoRA weights are initialized such that the quantization error is minimized. To use LoftQ, follow [these instructions](https://github.com/huggingface/peft/tree/main/examples/loftq_finetuning).
71
+
72
+ In general, for LoftQ to work best, it is recommended to target as many layers with LoRA as possible, since those not targeted cannot have LoftQ applied. This means that passing `LoraConfig(..., target_modules="all-linear")` will most likely give the best results. Also, you should use `nf4` as quant type in your quantization config when using 4bit quantization, i.e. `BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")`.
73
+
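+ To give an idea of what the standard approach looks like in code, here is a minimal sketch (the exact values, such as `loftq_bits=4`, are assumptions; the linked instructions contain the complete recipe):
+
+ ```python
+ from peft import LoftQConfig, LoraConfig
+
+ loftq_config = LoftQConfig(loftq_bits=4)  # assumption: 4-bit LoftQ
+ lora_config = LoraConfig(
+     init_lora_weights="loftq",
+     loftq_config=loftq_config,
+     target_modules="all-linear",
+     task_type="CAUSAL_LM",
+ )
+ ```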
74
+ #### A more convenient way
75
+
76
+ An easier but more limited way to apply LoftQ initialization is to use the convenience function `replace_lora_weights_loftq`. This takes the quantized PEFT model as input and replaces the LoRA weights in-place with their LoftQ-initialized counterparts.
77
+
78
+ ```python
79
+ from peft import replace_lora_weights_loftq
80
+ from transformers import BitsAndBytesConfig
81
+
82
+ bnb_config = BitsAndBytesConfig(load_in_4bit=True, ...)
83
+ base_model = AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config)
84
+ # note: don't pass init_lora_weights="loftq" or loftq_config!
85
+ lora_config = LoraConfig(task_type="CAUSAL_LM")
86
+ peft_model = get_peft_model(base_model, lora_config)
87
+ replace_lora_weights_loftq(peft_model)
88
+ ```
89
+
90
+ `replace_lora_weights_loftq` also allows you to pass a `callback` argument to give you more control over which layers should be modified or not, which empirically can improve the results quite a lot. To see a more elaborate example of this, check out [this notebook](https://github.com/huggingface/peft/blob/main/examples/loftq_finetuning/LoftQ_weight_replacement.ipynb).
91
+
92
+ `replace_lora_weights_loftq` implements only one iteration step of LoftQ. This means that only the LoRA weights are updated, instead of iteratively updating LoRA weights and quantized base model weights. This may lead to lower performance but has the advantage that we can use the original quantized weights derived from the base model, instead of having to keep an extra copy of modified quantized weights. Whether this tradeoff is worthwhile depends on the use case.
93
+
94
+ At the moment, `replace_lora_weights_loftq` has these additional limitations:
95
+
96
+ - Model files must be stored as a `safetensors` file.
97
+ - Only bitsandbytes 4bit quantization is supported.
98
+
99
+ <Tip>
100
+
101
+ Learn more about how PEFT works with quantization in the [Quantization](quantization) guide.
102
+
103
+ </Tip>
104
+
105
+ ### Rank-stabilized LoRA
106
+
107
+ Another way to initialize [`LoraConfig`] is with the [rank-stabilized LoRA (rsLoRA)](https://huggingface.co/papers/2312.03732) method. The LoRA architecture scales each adapter during every forward pass by a fixed scalar which is set at initialization and depends on the rank `r`. The scalar is given by `lora_alpha/r` in the original implementation, but rsLoRA uses `lora_alpha/math.sqrt(r)` which stabilizes the adapters and increases the performance potential from using a higher `r`.
108
+
109
+ ```py
110
+ from peft import LoraConfig
111
+
112
+ config = LoraConfig(use_rslora=True, ...)
113
+ ```
114
+
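+ To see why this matters for higher ranks, compare the two scaling factors for a large `r` (plain arithmetic, not a PEFT API; the values are just an example):
+
+ ```py
+ import math
+
+ r, lora_alpha = 256, 16
+ print(lora_alpha / r)             # 0.0625: the default scaling shrinks the update a lot
+ print(lora_alpha / math.sqrt(r))  # 1.0: rsLoRA keeps the update at a stable scale
+ ```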
115
+ ### Weight-Decomposed Low-Rank Adaptation (DoRA)
116
+
117
+ This technique decomposes the updates of the weights into two parts, magnitude and direction. Direction is handled by normal LoRA, whereas the magnitude is handled by a separate learnable parameter. This can improve the performance of LoRA, especially at low ranks. For more information on DoRA, see https://arxiv.org/abs/2402.09353.
118
+
119
+ ```py
120
+ from peft import LoraConfig
121
+
122
+ config = LoraConfig(use_dora=True, ...)
123
+ ```
124
+
125
+ If parts of the model or the DoRA adapter are offloaded to CPU you can get a significant speedup at the cost of some temporary (ephemeral) VRAM overhead by using `ephemeral_gpu_offload=True` in `config.runtime_config`.
126
+
127
+ ```py
128
+ from peft import LoraConfig, LoraRuntimeConfig
129
+
130
+ config = LoraConfig(use_dora=True, runtime_config=LoraRuntimeConfig(ephemeral_gpu_offload=True), ...)
131
+ ```
132
+
133
+ A `PeftModel` with a DoRA adapter can also be loaded with the `ephemeral_gpu_offload=True` flag using the `from_pretrained` method as well as the `load_adapter` method.
134
+
135
+ ```py
136
+ from peft import PeftModel
137
+
138
+ model = PeftModel.from_pretrained(base_model, peft_model_id, ephemeral_gpu_offload=True)
139
+ ```
140
+
141
+ #### Caveats
142
+
143
+ - DoRA only supports linear and Conv2d layers at the moment.
144
+ - DoRA introduces a bigger overhead than pure LoRA, so it is recommended to merge weights for inference, see [`LoraModel.merge_and_unload`].
145
+ - DoRA should work with weights quantized with bitsandbytes ("QDoRA"). However, issues have been reported when using QDoRA with DeepSpeed Zero2.
146
+
147
+ ### QLoRA-style training
148
+
149
+ The default LoRA settings in PEFT add trainable weights to the query and value layers of each attention block. But [QLoRA](https://hf.co/papers/2305.14314), which adds trainable weights to all the linear layers of a transformer model, can provide performance equal to a fully finetuned model. To apply LoRA to all the linear layers, like in QLoRA, set `target_modules="all-linear"` (easier than specifying individual modules by name which can vary depending on the architecture).
150
+
151
+ ```py
152
+ config = LoraConfig(target_modules="all-linear", ...)
153
+ ```
154
+
155
+ ### Memory efficient Layer Replication with LoRA
156
+
157
+ An approach used to improve the performance of models is to expand a model by duplicating layers to build a larger model from a pretrained model of a given size, for example increasing a 7B model to a 10B model as described in the [SOLAR](https://arxiv.org/abs/2312.15166) paper. PEFT LoRA supports this kind of expansion in a memory efficient manner: after the layers are replicated, you can further fine-tune the model with LoRA adapters attached to them. The replicated layers do not take additional memory because they share the underlying weights, so the only additional memory required is the memory for the adapter weights. To use this feature, create a config with the `layer_replication` argument.
158
+
159
+ ```py
160
+ config = LoraConfig(layer_replication=[[0,4], [2,5]], ...)
161
+ ```
162
+
163
+ Assuming the original model had 5 layers `[0, 1, 2, 3, 4]`, this would create a model with 7 layers arranged as `[0, 1, 2, 3, 2, 3, 4]`. This follows the [mergekit](https://github.com/arcee-ai/mergekit) passthrough merge convention, where sequences of layers specified as start-inclusive, end-exclusive tuples are stacked to build the final model. Each layer in the final model gets its own distinct set of LoRA adapters.
164
+
165
+ [Fewshot-Metamath-OrcaVicuna-Mistral-10B](https://huggingface.co/abacusai/Fewshot-Metamath-OrcaVicuna-Mistral-10B) is an example of a model trained using this method on Mistral-7B expanded to 10B. The
166
+ [adapter_config.json](https://huggingface.co/abacusai/Fewshot-Metamath-OrcaVicuna-Mistral-10B/blob/main/adapter_config.json) shows a sample LoRA adapter config applying this method for fine-tuning.
167
+
168
+ ## Optimizers
169
+
170
+ LoRA training can optionally include special purpose optimizers. Currently the only such optimizer is LoRA+.
171
+
172
+ ### LoRA+ optimized LoRA
173
+
174
+ LoRA training can be optimized using [LoRA+](https://arxiv.org/abs/2402.12354), which uses different learning rates for the adapter matrices A and B, shown to increase finetuning speed by up to 2x and performance by 1-2%.
175
+
176
+ ```py
177
+ from peft import LoraConfig, get_peft_model
178
+ from peft.optimizers import create_loraplus_optimizer
179
+ from transformers import Trainer
180
+ import bitsandbytes as bnb
181
+
182
+ base_model = ...
183
+ config = LoraConfig(...)
184
+ model = get_peft_model(base_model, config)
185
+
186
+ optimizer = create_loraplus_optimizer(
187
+ model=model,
188
+ optimizer_cls=bnb.optim.Adam8bit,
189
+ lr=5e-5,
190
+ loraplus_lr_ratio=16,
191
+ )
192
+ scheduler = None
193
+
194
+ ...
195
+ trainer = Trainer(
196
+ ...,
197
+ optimizers=(optimizer, scheduler),
198
+ )
199
+ ```
200
+
201
+ ## Merge LoRA weights into the base model
202
+
203
+ While LoRA is significantly smaller and faster to train, you may encounter latency issues during inference due to separately loading the base model and the LoRA adapter. To eliminate latency, use the [`~LoraModel.merge_and_unload`] function to merge the adapter weights with the base model. This allows you to use the newly merged model as a standalone model. The [`~LoraModel.merge_and_unload`] function doesn't keep the adapter weights in memory.
204
+
205
+ Below is a diagram that explains the intuition of LoRA adapter merging:
206
+
207
+ <div class="flex justify-center">
208
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora_diagram.png"/>
209
+ </div>
210
+
211
+ The snippets below show how to do this with PEFT.
212
+
213
+ ```py
214
+ from transformers import AutoModelForCausalLM
215
+ from peft import PeftModel
216
+
217
+ base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
218
+ peft_model_id = "alignment-handbook/zephyr-7b-sft-lora"
219
+ model = PeftModel.from_pretrained(base_model, peft_model_id)
220
+ model.merge_and_unload()
221
+ ```
222
+
223
+ If you need to keep a copy of the weights so you can unmerge the adapter later, or delete and load different ones, you should use the [`~LoraModel.merge_adapter`] function instead. This gives you the option to later use [`~LoraModel.unmerge_adapter`] to return to the base model.
224
+
225
+ ```py
226
+ from transformers import AutoModelForCausalLM
227
+ from peft import PeftModel
228
+
229
+ base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
230
+ peft_model_id = "alignment-handbook/zephyr-7b-sft-lora"
231
+ model = PeftModel.from_pretrained(base_model, peft_model_id)
232
+ model.merge_adapter()
233
+
234
+ # unmerge the LoRA layers from the base model
235
+ model.unmerge_adapter()
236
+ ```
237
+
238
+ The [`~LoraModel.add_weighted_adapter`] function is useful for merging multiple LoRAs into a new adapter based on a user provided weighting scheme in the `weights` parameter. Below is an end-to-end example.
239
+
240
+ First load the base model:
241
+
242
+ ```python
243
+ from transformers import AutoModelForCausalLM
244
+ from peft import PeftModel
245
+ import torch
246
+
247
+ base_model = AutoModelForCausalLM.from_pretrained(
248
+ "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16, device_map="auto"
249
+ )
250
+ ```
251
+
252
+ Then we load the first adapter:
253
+
254
+ ```python
255
+ peft_model_id = "alignment-handbook/zephyr-7b-sft-lora"
256
+ model = PeftModel.from_pretrained(base_model, peft_model_id, adapter_name="sft")
257
+ ```
258
+
259
+ Then load a different adapter and merge it with the first one:
260
+
261
+ ```python
262
+ weighted_adapter_name = "sft-dpo"
263
+ model.load_adapter("alignment-handbook/zephyr-7b-dpo-lora", adapter_name="dpo")
264
+ model.add_weighted_adapter(
265
+ adapters=["sft", "dpo"],
266
+ weights=[0.7, 0.3],
267
+ adapter_name=weighted_adapter_name,
268
+ combination_type="linear"
269
+ )
270
+ model.set_adapter(weighted_adapter_name)
271
+ ```
272
+
273
+ <Tip>
274
+
275
+ There are several supported methods for `combination_type`. Refer to the [documentation](../package_reference/lora#peft.LoraModel.add_weighted_adapter) for more details. Note that "svd" as the `combination_type` is not supported when using `torch.float16` or `torch.bfloat16` as the datatype.
276
+
277
+ </Tip>
278
+
279
+ Now, perform inference:
280
+
281
+ ```python
282
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
283
+
284
+ prompt = "Hey, are you conscious? Can you talk to me?"
285
+ inputs = tokenizer(prompt, return_tensors="pt")
286
+ inputs = {k: v.to("cuda") for k, v in inputs.items()}
287
+
288
+ with torch.no_grad():
289
+ generate_ids = model.generate(**inputs, max_length=30)
290
+ outputs = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
291
+ print(outputs)
292
+ ```
293
+
294
+ ## Load adapters
295
+
296
+ Adapters can be loaded onto a pretrained model with [`~PeftModel.load_adapter`], which is useful for trying out different adapters whose weights aren't merged. Set the active adapter weights with the [`~LoraModel.set_adapter`] function.
297
+
298
+ ```py
299
+ from transformers import AutoModelForCausalLM
300
+ from peft import PeftModel
301
+
302
+ base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
303
+ peft_model_id = "alignment-handbook/zephyr-7b-sft-lora"
304
+ model = PeftModel.from_pretrained(base_model, peft_model_id)
305
+
306
+ # load different adapter
307
+ model.load_adapter("alignment-handbook/zephyr-7b-dpo-lora", adapter_name="dpo")
308
+
309
+ # set adapter as active
310
+ model.set_adapter("dpo")
311
+ ```
312
+
313
+ To return the base model, you could use [`~LoraModel.unload`] to unload all of the LoRA modules or [`~LoraModel.delete_adapter`] to delete the adapter entirely.
314
+
315
+ ```py
316
+ # unload adapter
317
+ model.unload()
318
+
319
+ # delete adapter
320
+ model.delete_adapter("dpo")
321
+ ```
322
+
323
+ ## Inference with different LoRA adapters in the same batch
324
+
325
+ Normally, each inference batch has to use the same adapter(s) in PEFT. This can sometimes be annoying, because we may have batches that contain samples intended to be used with different LoRA adapters. For example, we could have a base model that works well in English and two more LoRA adapters, one for French and one for German. Usually, we would have to split our batches such that each batch only contains samples of one of the languages; we cannot combine different languages in the same batch.
326
+
327
+ Thankfully, it is possible to mix different LoRA adapters in the same batch using the `adapter_names` argument. Below, we show an example of how this works in practice. First, let's load the base model (used for English) and the two adapters (French and German), like this:
328
+
329
+ ```python
330
+ from transformers import AutoTokenizer, AutoModelForCausalLM
331
+ from peft import PeftModel
332
+
333
+ model_id = ...
334
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
335
+
336
+ model = AutoModelForCausalLM.from_pretrained(model_id)
337
+ # load the LoRA adapter for French
338
+ peft_model = PeftModel.from_pretrained(model, <path>, adapter_name="adapter_fr")
339
+ # next, load the LoRA adapter for German
340
+ peft_model.load_adapter(<path>, adapter_name="adapter_de")
341
+ ```
342
+
343
+ Now, we want to generate text on a batch that contains all three languages: the first three samples are in English, the next three are in French, and the last three are in German. We can use the `adapter_names` argument to specify which adapter to use for each sample. Since our base model is used for English, we use the special string `"__base__"` for these samples. For the next three samples, we indicate the adapter name of the French LoRA fine-tune, in this case `"adapter_fr"`. For the last three samples, we indicate the adapter name of the German LoRA fine-tune, in this case `"adapter_de"`. This way, we can use the base model and the two adapters in a single batch.
344
+
345
+ ```python
346
+ inputs = tokenizer(
347
+ [
348
+ "Hello, my dog is cute",
349
+ "Hello, my cat is awesome",
350
+ "Hello, my fish is great",
351
+ "Salut, mon chien est mignon",
352
+ "Salut, mon chat est génial",
353
+ "Salut, mon poisson est super",
354
+ "Hallo, mein Hund ist süß",
355
+ "Hallo, meine Katze ist toll",
356
+ "Hallo, mein Fisch ist großartig",
357
+ ],
358
+ return_tensors="pt",
359
+ padding=True,
360
+ )
361
+
362
+ adapter_names = [
363
+ "__base__", "__base__", "__base__",
364
+ "adapter_fr", "adapter_fr", "adapter_fr",
365
+ "adapter_de", "adapter_de", "adapter_de",
366
+ ]
367
+ output = peft_model.generate(**inputs, adapter_names=adapter_names, max_new_tokens=20)
368
+ ```
369
+
370
+ Note that the order does not matter here, i.e. the samples in the batch don't need to be grouped by adapter as in the example above. We just need to ensure that the `adapter_names` argument is aligned correctly with the samples.
371
+
372
+ ### Caveats
373
+
374
+ Using this feature has some drawbacks, namely:
375
+
376
+ - It only works for inference, not for training.
377
+ - Disabling adapters using the `with model.disable_adapter()` context takes precedence over `adapter_names`.
378
+ - You cannot pass `adapter_names` when some adapter weights were merged with the base weights using the `merge_adapter` method. Please unmerge all adapters first by calling `model.unmerge_adapter()`.
379
+ - For obvious reasons, this cannot be used after calling `merge_and_unload()`, since all the LoRA adapters will be merged into the base weights in this case.
380
+ - This feature does not currently work with DoRA, so set `use_dora=False` in your `LoraConfig` if you want to use it.
381
+ - There is an expected overhead for inference with `adapter_names`, especially if the number of different adapters in the batch is high. This is because the batch size is effectively reduced to the number of samples per adapter. If runtime performance is your top priority, try the following:
382
+ - Increase the batch size.
383
+ - Try to avoid having a large number of different adapters in the same batch; prefer homogeneous batches. This can be achieved by buffering samples with the same adapter and only performing inference with a small handful of different adapters.
384
+ - Take a look at alternative implementations such as [LoRAX](https://github.com/predibase/lorax), [punica](https://github.com/punica-ai/punica), or [S-LoRA](https://github.com/S-LoRA/S-LoRA), which are specialized to work with a large number of different adapters.
peft_md_files/developer_guides/low_level_api.md ADDED
@@ -0,0 +1,97 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # Adapter injection
18
+
19
+ With PEFT, you can inject trainable adapters into any `torch` module which allows you to use adapter methods without relying on the modeling classes in PEFT. Currently, PEFT supports injecting [LoRA](../conceptual_guides/adapter#low-rank-adaptation-lora), [AdaLoRA](../conceptual_guides/adapter#adaptive-low-rank-adaptation-adalora), and [IA3](../conceptual_guides/ia3) into models because for these adapters, inplace modification of the model is sufficient for finetuning it.
20
+
21
+ Check the table below to see when you should inject adapters.
22
+
23
+ | Pros | Cons |
24
+ |---|---|
25
+ | the model is modified in place, keeping all the original attributes and methods | you must manually write the `from_pretrained` and `save_pretrained` utility functions from Hugging Face to save and load adapters |
26
+ | works for any `torch` module and modality | doesn't work with any of the utility methods provided by `PeftModel` such as disabling and merging adapters |
27
+
28
+ To perform the adapter injection, use the [`inject_adapter_in_model`] method. This method takes 3 arguments, the PEFT config, the model, and an optional adapter name. You can also attach multiple adapters to the model if you call [`inject_adapter_in_model`] multiple times with different adapter names.
29
+
30
+ For example, to inject LoRA adapters into the `linear` submodule of the `DummyModel` module:
31
+
32
+ ```python
33
+ import torch
34
+ from peft import inject_adapter_in_model, LoraConfig
35
+
36
+ class DummyModel(torch.nn.Module):
37
+ def __init__(self):
38
+ super().__init__()
39
+ self.embedding = torch.nn.Embedding(10, 10)
40
+ self.linear = torch.nn.Linear(10, 10)
41
+ self.lm_head = torch.nn.Linear(10, 10)
42
+
43
+ def forward(self, input_ids):
44
+ x = self.embedding(input_ids)
45
+ x = self.linear(x)
46
+ x = self.lm_head(x)
47
+ return x
48
+
49
+
50
+ lora_config = LoraConfig(
51
+ lora_alpha=16,
52
+ lora_dropout=0.1,
53
+ r=64,
54
+ bias="none",
55
+ target_modules=["linear"],
56
+ )
57
+
58
+ model = DummyModel()
59
+ model = inject_adapter_in_model(lora_config, model)
60
+
61
+ dummy_inputs = torch.LongTensor([[0, 1, 2, 3, 4, 5, 6, 7]])
62
+ dummy_outputs = model(dummy_inputs)
63
+ ```
64
+
65
+ Print the model to see that the adapters have been correctly injected.
66
+
67
+ ```bash
68
+ DummyModel(
69
+ (embedding): Embedding(10, 10)
70
+ (linear): Linear(
71
+ in_features=10, out_features=10, bias=True
72
+ (lora_dropout): ModuleDict(
73
+ (default): Dropout(p=0.1, inplace=False)
74
+ )
75
+ (lora_A): ModuleDict(
76
+ (default): Linear(in_features=10, out_features=64, bias=False)
77
+ )
78
+ (lora_B): ModuleDict(
79
+ (default): Linear(in_features=64, out_features=10, bias=False)
80
+ )
81
+ (lora_embedding_A): ParameterDict()
82
+ (lora_embedding_B): ParameterDict()
83
+ )
84
+ (lm_head): Linear(in_features=10, out_features=10, bias=True)
85
+ )
86
+ ```
87
+
88
+ To only save the adapter, use the [`get_peft_model_state_dict`] function:
89
+
90
+ ```python
91
+ from peft import get_peft_model_state_dict
92
+
93
+ peft_state_dict = get_peft_model_state_dict(model)
94
+ print(peft_state_dict)
95
+ ```
96
+
97
+ Otherwise, `model.state_dict()` returns the full state dict of the model.
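+
+ Since the injection API does not come with `save_pretrained`/`from_pretrained` helpers, one minimal way to persist and restore just the adapter weights is sketched below, continuing the `DummyModel` example (the file name is arbitrary, and it is assumed that `set_peft_model_state_dict` accepts the injected model in the same way `get_peft_model_state_dict` does):
+
+ ```python
+ import torch
+ from peft import get_peft_model_state_dict, set_peft_model_state_dict
+
+ # save only the adapter weights
+ torch.save(get_peft_model_state_dict(model), "adapter_state_dict.pt")
+
+ # later: re-create the model, inject the adapter again, then restore the weights
+ model = DummyModel()
+ model = inject_adapter_in_model(lora_config, model)
+ set_peft_model_state_dict(model, torch.load("adapter_state_dict.pt"))
+ ```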
peft_md_files/developer_guides/mixed_models.md ADDED
@@ -0,0 +1,37 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+ -->
12
+
13
+ # Mixed adapter types
14
+
15
+ Normally, it isn't possible to mix different adapter types in 🤗 PEFT. You can create a PEFT model with two different LoRA adapters (which can have different config options), but it is not possible to combine a LoRA and LoHa adapter. With [`PeftMixedModel`] however, this works as long as the adapter types are compatible. The main purpose of allowing mixed adapter types is to combine trained adapters for inference. While it is possible to train a mixed adapter model, this has not been tested and is not recommended.
16
+
17
+ To load different adapter types into a PEFT model, use [`PeftMixedModel`] instead of [`PeftModel`]:
18
+
19
+ ```py
20
+ from peft import PeftMixedModel
21
+
22
+ base_model = ... # load the base model, e.g. from transformers
23
+ # load first adapter, which will be called "default"
24
+ peft_model = PeftMixedModel.from_pretrained(base_model, <path_to_adapter1>)
25
+ peft_model.load_adapter(<path_to_adapter2>, adapter_name="other")
26
+ peft_model.set_adapter(["default", "other"])
27
+ ```
28
+
29
+ The [`~PeftMixedModel.set_adapter`] method is necessary to activate both adapters, otherwise only the first adapter would be active. You can keep adding more adapters by calling [`~PeftModel.add_adapter`] repeatedly.
30
+
31
+ [`PeftMixedModel`] does not support saving and loading mixed adapters. The adapters should already be trained, and loading the model requires a script to be run each time.
32
+
33
+ ## Tips
34
+
35
+ - Not all adapter types can be combined. See [`peft.tuners.mixed.COMPATIBLE_TUNER_TYPES`](https://github.com/huggingface/peft/blob/1c1c7fdaa6e6abaa53939b865dee1eded82ad032/src/peft/tuners/mixed/model.py#L35) for a list of compatible types. An error will be raised if you try to combine incompatible adapter types.
36
+ - It is possible to mix multiple adapters of the same type which can be useful for combining adapters with very different configs.
37
+ - If you want to combine a lot of different adapters, the most performant way to do it is to consecutively add the same adapter types. For example, add LoRA1, LoRA2, LoHa1, LoHa2 in this order, instead of LoRA1, LoHa1, LoRA2, and LoHa2. While the order can affect the output, there is no inherently *best* order, so it is best to choose the fastest one.
peft_md_files/developer_guides/model_merging.md ADDED
@@ -0,0 +1,157 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # Model merging
18
+
19
+ Training a model for each task can be costly, take up storage space, and the models aren't able to learn new information to improve their performance. Multitask learning can overcome some of these limitations by training a model to learn several tasks, but it is expensive to train and designing a dataset for it is challenging. *Model merging* offers a solution to these challenges by combining multiple pretrained models into one model, giving it the combined abilities of each individual model without any additional training.
20
+
21
+ PEFT provides several methods for merging models like a linear or SVD combination. This guide focuses on two methods that are more efficient for merging LoRA adapters by eliminating redundant parameters:
22
+
23
+ * [TIES](https://hf.co/papers/2306.01708) - TrIm, Elect, and Merge (TIES) is a three-step method for merging models. First, redundant parameters are trimmed, then conflicting signs are resolved into an aggregated vector, and finally the parameters whose signs are the same as the aggregate sign are averaged. This method takes into account that some values (redundant and sign disagreement) can degrade performance in the merged model.
24
+ * [DARE](https://hf.co/papers/2311.03099) - Drop And REscale is a method that can be used to prepare for other model merging methods like TIES. It works by randomly dropping parameters according to a drop rate and rescaling the remaining parameters. This helps to reduce the number of redundant and potentially interfering parameters among multiple models.
25
+
26
+ Models are merged with the [`~LoraModel.add_weighted_adapter`] method, and the specific model merging method is specified in the `combination_type` parameter.
27
+
28
+ ## Merge method
29
+
30
+ With TIES and DARE, merging is enabled by setting `combination_type` and `density` (the fraction of weights to keep from the individual models). For example, let's merge three finetuned [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) models: [tinyllama_lora_norobots](https://huggingface.co/smangrul/tinyllama_lora_norobots), [tinyllama_lora_sql](https://huggingface.co/smangrul/tinyllama_lora_sql), and [tinyllama_lora_adcopy](https://huggingface.co/smangrul/tinyllama_lora_adcopy).
31
+
32
+ <Tip warning={true}>
33
+
34
+ When you're attempting to merge fully trained models with TIES, you should be aware of any special tokens each model may have added to the embedding layer which are not a part of the original checkpoint's vocabulary. This may cause an issue because each model may have added a special token to the same embedding position. If this is the case, you should use the [`~transformers.PreTrainedModel.resize_token_embeddings`] method to avoid merging the special tokens at the same embedding index.
35
+
36
+ <br>
37
+
38
+ This shouldn't be an issue if you're only merging LoRA adapters trained from the same base model.
39
+
40
+ </Tip>
41
+
42
+ Load a base model and use the [`~PeftModel.load_adapter`] method to load and assign a name to each adapter:
43
+
44
+ ```py
45
+ from peft import PeftConfig, PeftModel
46
+ from transformers import AutoModelForCausalLM, AutoTokenizer
47
+ import torch
48
+
49
+ config = PeftConfig.from_pretrained("smangrul/tinyllama_lora_norobots")
50
+ model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, load_in_4bit=True, device_map="auto").eval()
51
+ tokenizer = AutoTokenizer.from_pretrained("smangrul/tinyllama_lora_norobots")
52
+
53
+ model = PeftModel.from_pretrained(model, "smangrul/tinyllama_lora_norobots", adapter_name="norobots")
54
+ _ = model.load_adapter("smangrul/tinyllama_lora_sql", adapter_name="sql")
55
+ _ = model.load_adapter("smangrul/tinyllama_lora_adcopy", adapter_name="adcopy")
56
+ ```
57
+
58
+ Set the adapters, weights, `adapter_name`, `combination_type`, and `density` with the [`~LoraModel.add_weighted_adapter`] method.
59
+
60
+ <hfoptions id="merge-method">
61
+ <hfoption id="TIES">
62
+
63
+ Weight values greater than `1.0` typically produce better results because they preserve the correct scale. A good default starting value for the weights is to set all values to `1.0`.
64
+
65
+ ```py
66
+ adapters = ["norobots", "adcopy", "sql"]
67
+ weights = [2.0, 1.0, 1.0]
68
+ adapter_name = "merge"
69
+ density = 0.2
70
+ model.add_weighted_adapter(adapters, weights, adapter_name, combination_type="ties", density=density)
71
+ ```
72
+
73
+ </hfoption>
74
+ <hfoption id="DARE">
75
+
76
+ ```py
77
+ adapters = ["norobots", "adcopy", "sql"]
78
+ weights = [2.0, 0.3, 0.7]
79
+ adapter_name = "merge"
80
+ density = 0.2
81
+ model.add_weighted_adapter(adapters, weights, adapter_name, combination_type="dare_ties", density=density)
82
+ ```
83
+
84
+ </hfoption>
85
+ </hfoptions>
86
+
87
+ Set the newly merged model as the active model with the [`~LoraModel.set_adapter`] method.
88
+
89
+ ```py
90
+ model.set_adapter("merge")
91
+ ```
92
+
93
+ Now you can use the merged model as an instruction-tuned model to write ad copy or SQL queries!
94
+
95
+ <hfoptions id="ties">
96
+ <hfoption id="instruct">
97
+
98
+ ```py
99
+ messages = [
100
+ {"role": "user", "content": "Write an essay about Generative AI."},
101
+ ]
102
+ text = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
103
+ inputs = tokenizer(text, return_tensors="pt")
104
+ inputs = {k: v.to("cuda") for k, v in inputs.items()}
105
+ outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.95, temperature=0.2, repetition_penalty=1.2, eos_token_id=tokenizer.eos_token_id)
106
+ print(tokenizer.decode(outputs[0]))
107
+ ```
108
+
109
+ </hfoption>
110
+ <hfoption id="ad copy">
111
+
112
+ ```py
113
+ messages = [
114
+ {"role": "system", "content": "Create a text ad given the following product and description."},
115
+ {"role": "user", "content": "Product: Sony PS5 PlayStation Console\nDescription: The PS5 console unleashes new gaming possibilities that you never anticipated."},
116
+ ]
117
+ text = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
118
+ inputs = tokenizer(text, return_tensors="pt")
119
+ inputs = {k: v.to("cuda") for k, v in inputs.items()}
120
+ outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.95, temperature=0.2, repetition_penalty=1.2, eos_token_id=tokenizer.eos_token_id)
121
+ print(tokenizer.decode(outputs[0]))
122
+ ```
123
+
124
+ </hfoption>
125
+ <hfoption id="SQL">
126
+
127
+ ```py
128
+ text = """Table: 2-11365528-2
129
+ Columns: ['Team', 'Head Coach', 'President', 'Home Ground', 'Location']
130
+ Natural Query: Who is the Head Coach of the team whose President is Mario Volarevic?
131
+ SQL Query:"""
132
+
133
+ inputs = tokenizer(text, return_tensors="pt")
134
+ inputs = {k: v.to("cuda") for k, v in inputs.items()}
135
+ outputs = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.1, eos_token_id=tokenizer("</s>").input_ids[-1])
136
+ print(tokenizer.decode(outputs[0]))
137
+ ```
138
+
139
+ </hfoption>
140
+ </hfoptions>
141
+
142
+
143
+ ## Merging (IA)³ Models
144
+ The (IA)³ models facilitate linear merging of adapters. To merge adapters in an (IA)³ model, utilize the `add_weighted_adapter` method from the `IA3Model` class. This method is analogous to the `add_weighted_adapter` method used in `LoraModel`, with the key difference being the absence of the `combination_type` parameter. For example, to merge three (IA)³ adapters into a PEFT model, you would proceed as follows:
145
+
146
+ ```py
147
+ adapters = ["adapter1", "adapter2", "adapter3"]
148
+ weights = [0.4, 0.3, 0.3]
149
+ adapter_name = "merge"
150
+ model.add_weighted_adapter(adapters, weights, adapter_name)
151
+ ```
152
+
153
+ It is recommended that the weights sum to 1.0 to preserve the scale of the model. The merged model can then be set as the active model using the `set_adapter` method:
154
+
155
+ ```py
156
+ model.set_adapter("merge")
157
+ ```
peft_md_files/developer_guides/quantization.md ADDED
@@ -0,0 +1,200 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # Quantization
18
+
19
+ Quantization represents data with fewer bits, making it a useful technique for reducing memory usage and accelerating inference, especially for large language models (LLMs). There are several ways to quantize a model, including:
20
+
21
+ * optimizing which model weights are quantized with the [AWQ](https://hf.co/papers/2306.00978) algorithm
22
+ * independently quantizing each row of a weight matrix with the [GPTQ](https://hf.co/papers/2210.17323) algorithm
23
+ * quantizing to 8-bit and 4-bit precision with the [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) library
24
+ * quantizing to as low as 2-bit precision with the [AQLM](https://arxiv.org/abs/2401.06118) algorithm
25
+
26
+ However, after a model is quantized it isn't typically further trained for downstream tasks because training can be unstable due to the lower precision of the weights and activations. But since PEFT methods only add *extra* trainable parameters, this allows you to train a quantized model with a PEFT adapter on top! Combining quantization with PEFT can be a good strategy for training even the largest models on a single GPU. For example, [QLoRA](https://hf.co/papers/2305.14314) is a method that quantizes a model to 4-bits and then trains it with LoRA. This method allows you to finetune a 65B parameter model on a single 48GB GPU!
27
+
28
+ In this guide, you'll see how to quantize a model to 4-bits and train it with LoRA.
29
+
30
+ ## Quantize a model
31
+
32
+ [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) is a quantization library with a Transformers integration. With this integration, you can quantize a model to 8 or 4-bits and enable many other options by configuring the [`~transformers.BitsAndBytesConfig`] class. For example, you can:
33
+
34
+ * set `load_in_4bit=True` to quantize the model to 4-bits when you load it
35
+ * set `bnb_4bit_quant_type="nf4"` to use a special 4-bit data type for weights initialized from a normal distribution
36
+ * set `bnb_4bit_use_double_quant=True` to use a nested quantization scheme to quantize the already quantized weights
37
+ * set `bnb_4bit_compute_dtype=torch.bfloat16` to use bfloat16 for faster computation
38
+
39
+ ```py
40
+ import torch
41
+ from transformers import BitsAndBytesConfig
42
+
43
+ config = BitsAndBytesConfig(
44
+ load_in_4bit=True,
45
+ bnb_4bit_quant_type="nf4",
46
+ bnb_4bit_use_double_quant=True,
47
+ bnb_4bit_compute_dtype=torch.bfloat16,
48
+ )
49
+ ```
50
+
51
+ Pass the `config` to the [`~transformers.AutoModelForCausalLM.from_pretrained`] method.
52
+
53
+ ```py
54
+ from transformers import AutoModelForCausalLM
55
+
56
+ model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", quantization_config=config)
57
+ ```
58
+
59
+ Next, you should call the [`~peft.utils.prepare_model_for_kbit_training`] function to preprocess the quantized model for training.
60
+
61
+ ```py
62
+ from peft import prepare_model_for_kbit_training
63
+
64
+ model = prepare_model_for_kbit_training(model)
65
+ ```
66
+
67
+ Now that the quantized model is ready, let's set up a configuration.
68
+
69
+ ## LoraConfig
70
+
71
+ Create a [`LoraConfig`] with the following parameters (or choose your own):
72
+
73
+ ```py
74
+ from peft import LoraConfig
75
+
76
+ config = LoraConfig(
77
+ r=16,
78
+ lora_alpha=8,
79
+ target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
80
+ lora_dropout=0.05,
81
+ bias="none",
82
+ task_type="CAUSAL_LM"
83
+ )
84
+ ```
85
+
86
+ Then use the [`get_peft_model`] function to create a [`PeftModel`] from the quantized model and configuration.
87
+
88
+ ```py
89
+ from peft import get_peft_model
90
+
91
+ model = get_peft_model(model, config)
92
+ ```
93
+
94
+ You're all set for training with whichever training method you prefer!
95
+
96
+ ### LoftQ initialization
97
+
98
+ [LoftQ](https://hf.co/papers/2310.08659) initializes LoRA weights such that the quantization error is minimized, and it can improve performance when training quantized models. To get started, follow [these instructions](https://github.com/huggingface/peft/tree/main/examples/loftq_finetuning).
99
+
100
+ In general, for LoftQ to work best, it is recommended to target as many layers with LoRA as possible, since those not targeted cannot have LoftQ applied. This means that passing `LoraConfig(..., target_modules="all-linear")` will most likely give the best results. Also, you should use `nf4` as quant type in your quantization config when using 4bit quantization, i.e. `BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")`.
101
+
102
+ ### QLoRA-style training
103
+
104
+ QLoRA adds trainable weights to all the linear layers in the transformer architecture. Since the attribute names for these linear layers can vary across architectures, set `target_modules` to `"all-linear"` to add LoRA to all the linear layers:
105
+
106
+ ```py
107
+ config = LoraConfig(target_modules="all-linear", ...)
108
+ ```
109
+
110
+ ## AQLM quantization
111
+
112
+ Additive Quantization of Language Models ([AQLM](https://arxiv.org/abs/2401.06118)) is a Large Language Models compression method. It quantizes multiple weights together and takes advantage of interdependencies between them. AQLM represents groups of 8-16 weights as a sum of multiple vector codes. This allows it to compress models down to as low as 2-bit with considerably low accuracy losses.
113
+
114
+ Since the AQLM quantization process is computationally expensive, using prequantized models is recommended. A partial list of available models can be found in the official aqlm [repository](https://github.com/Vahe1994/AQLM).
115
+
116
+ These models support LoRA adapter tuning. To tune the quantized model, you'll need to install the `aqlm` inference library: `pip install aqlm>=1.0.2`. Finetuned LoRA adapters must be saved separately, as merging them with AQLM quantized weights is not possible.
117
+
118
+ ```py
119
+ from transformers import AutoModelForCausalLM
+ from peft import LoraConfig, get_peft_model
+
+ quantized_model = AutoModelForCausalLM.from_pretrained(
120
+ "BlackSamorez/Mixtral-8x7b-AQLM-2Bit-1x16-hf-test-dispatch",
121
+ torch_dtype="auto", device_map="auto", low_cpu_mem_usage=True,
122
+ )
123
+
124
+ peft_config = LoraConfig(...)
125
+
126
+ quantized_model = get_peft_model(quantized_model, peft_config)
127
+ ```
128
+
129
+ You can refer to the [Google Colab](https://colab.research.google.com/drive/12GTp1FCj5_0SnnNQH18h_2XFh9vS_guX?usp=sharing) example for an overview of AQLM+LoRA finetuning.
130
+
131
+ ## EETQ quantization
132
+
133
+ You can also perform LoRA fine-tuning on EETQ quantized models. The [EETQ](https://github.com/NetEase-FuXi/EETQ) package offers a simple and efficient way to perform 8-bit quantization, which is claimed to be faster than the `LLM.int8()` algorithm. First, make sure that you have a Transformers version that is compatible with EETQ (e.g. by installing it from the latest PyPI or from source).
134
+
135
+ ```py
136
+ import torch
137
+ from transformers import EetqConfig
138
+
139
+ config = EetqConfig("int8")
140
+ ```
141
+
142
+ Pass the `config` to the [`~transformers.AutoModelForCausalLM.from_pretrained`] method.
143
+
144
+ ```py
145
+ from transformers import AutoModelForCausalLM
146
+
147
+ model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", quantization_config=config)
148
+ ```
149
+
150
+ and create a `LoraConfig` and pass it to `get_peft_model`:
151
+
152
+ ```py
153
+ from peft import LoraConfig, get_peft_model
154
+
155
+ config = LoraConfig(
156
+ r=16,
157
+ lora_alpha=8,
158
+ target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
159
+ lora_dropout=0.05,
160
+ bias="none",
161
+ task_type="CAUSAL_LM"
162
+ )
163
+
164
+ model = get_peft_model(model, config)
165
+ ```
166
+
167
+ ## HQQ quantization
168
+
169
+ Models quantized with Half-Quadratic Quantization of Large Machine Learning Models ([HQQ](https://mobiusml.github.io/hqq_blog/)) support LoRA adapter tuning. To tune the quantized model, you'll need to install the `hqq` library with `pip install hqq`.
170
+
171
+ ```py
172
+ from hqq.engine.hf import HQQModelForCausalLM
+ from peft import LoraConfig, get_peft_model
173
+
174
+ quantized_model = HQQModelForCausalLM.from_quantized(save_dir_or_hfhub, device='cuda')
175
+
176
+ peft_config = LoraConfig(...)
177
+
178
+ quantized_model = get_peft_model(quantized_model, peft_config)
179
+ ```
180
+
181
+ Alternatively, you can use a Transformers version that is compatible with HQQ (e.g. by installing it from the latest PyPI or from source).
182
+
183
+ ```python
184
+ from transformers import HqqConfig, AutoModelForCausalLM
+ from peft import LoraConfig, get_peft_model
185
+
186
+ quant_config = HqqConfig(nbits=4, group_size=64)
187
+
188
+ quantized_model = AutoModelForCausalLM.from_pretrained(save_dir_or_hfhub, device='cuda', quantization_config=quant_config)
189
+
190
+ peft_config = LoraConfig(...)
191
+
192
+ quantized_model = get_peft_model(quantized_model, peft_config)
193
+ ```
194
+
195
+ ## Next steps
196
+
197
+ If you're interested in learning more about quantization, the following may be helpful:
198
+
199
+ * Learn more about the details of QLoRA and check out some benchmarks of its impact in the [Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes) blog post.
200
+ * Read more about different quantization schemes in the Transformers [Quantization](https://hf.co/docs/transformers/main/quantization) guide.
peft_md_files/developer_guides/torch_compile.md ADDED
@@ -0,0 +1,76 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # torch.compile
18
+
19
+ In PEFT, [torch.compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) works for some but not all features. The reason it won't always work is that PEFT is highly dynamic in certain places (loading and switching between multiple adapters, for instance), which can cause trouble for `torch.compile`. In other places, `torch.compile` may work, but won't be as fast as expected because of graph breaks.
20
+
21
+ If you don't see an error, it doesn't necessarily mean that `torch.compile` worked correctly. It might give you an output, but the output may be incorrect. This guide describes what works with `torch.compile` and what doesn't.
22
+
23
+ > [!TIP]
24
+ > Unless indicated otherwise, the default `torch.compile` settings were used.
25
+
26
+ ## Training and inference with `torch.compile`
27
+
28
+ These features **work** with `torch.compile`. Everything listed below was tested with a causal LM:
29
+
30
+ - Training with `Trainer` from 🤗 transformers
31
+ - Training with a custom PyTorch loop
32
+ - Inference
33
+ - Generation
34
+
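+ For reference, compiling a PEFT model is just a matter of wrapping it with `torch.compile` after the adapter has been added. A minimal sketch (the model name and LoRA settings are placeholders):
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM
+ from peft import LoraConfig, get_peft_model
+
+ base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
+ peft_model = get_peft_model(base_model, LoraConfig(task_type="CAUSAL_LM"))
+
+ # compile after the adapter has been injected
+ peft_model = torch.compile(peft_model)
+ ```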
35
+ The following adapters were tested successfully:
36
+
37
+ - AdaLoRA
38
+ - BOFT
39
+ - IA³
40
+ - Layer Norm Tuning
41
+ - LoHa
42
+ - LoRA
43
+ - LoRA + DoRA
44
+ - OFT
45
+ - VeRA
46
+ - HRA
47
+
48
+ The following adapters **don't work** correctly for training or inference when using `torch.compile`:
49
+
50
+ - LoKr
51
+ - LoRA targeting embedding layers
52
+
53
+ ## Advanced PEFT features with `torch.compile`
54
+
55
+ Below are some of the more advanced PEFT features that **work**. They were all tested with LoRA.
56
+
57
+ - `modules_to_save` (i.e. `config = LoraConfig(..., modules_to_save=...)`)
58
+ - Merging adapters (one or multiple)
59
+ - Merging multiple adapters into one adapter (i.e. calling `model.add_weighted_adapter(...)`)
60
+
61
+ Generally, we can expect that if a feature works correctly with LoRA and is also supported by other adapter types, it should also work for that adapter type.
62
+
63
+ The more advanced PEFT features below **don't work** in conjunction with `torch.compile`. Tests were run with LoRA:
64
+
65
+ - Using PEFT adapters with quantization (bitsandbytes)
66
+ - Inference with multiple adapters
67
+ - Unloading (i.e. calling `model.merge_and_unload()`)
68
+ - Disabling adapters (i.e. using `with model.disable_adapter()`)
69
+ - Mixed adapter batches (i.e. calling `model(batch, adapter_names=["__base__", "default", "other", ...])`)
70
+
71
+ ## Test cases
72
+
73
+ All the use cases listed above are tested in [`peft/tests/test_torch_compile.py`](https://github.com/huggingface/peft/blob/main/tests/test_torch_compile.py). If you want to check in more detail how we tested a certain feature, please go to that file and look at the test that corresponds to your use case.
74
+
75
+ > [!TIP]
76
+ > If you have another use case where you know that `torch.compile` does or does not work with PEFT, please contribute by letting us know or by opening a PR to add this use case to the covered test cases.
peft_md_files/developer_guides/troubleshooting.md ADDED
@@ -0,0 +1,273 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # Troubleshooting
18
+
19
+ If you encounter an issue when using PEFT, please check the following list of common issues and their solutions.
20
+
21
+ ## Examples don't work
22
+
23
+ Examples often rely on the most recent package versions, so please ensure your installation is up-to-date. In particular, check the versions of the following packages:
24
+
25
+ - `peft`
26
+ - `transformers`
27
+ - `accelerate`
28
+ - `torch`
29
+
30
+ In general, you can update the package version by running this command inside your Python environment:
31
+
32
+ ```bash
33
+ python -m pip install -U <package_name>
34
+ ```
35
+
36
+ Installing PEFT from source is useful for keeping up with the latest developments:
37
+
38
+ ```bash
39
+ python -m pip install git+https://github.com/huggingface/peft
40
+ ```
41
+
42
+ ## ValueError: Attempting to unscale FP16 gradients
43
+
44
+ This error probably occurred because the model was loaded with `torch_dtype=torch.float16` and then used in an automatic mixed precision (AMP) context, e.g. by setting `fp16=True` in the [`~transformers.Trainer`] class from 🤗 Transformers. The reason is that when using AMP, trainable weights should never use fp16. To make this work without loading the whole model in fp32, add the following to your code:
45
+
46
+ ```python
47
+ peft_model = get_peft_model(...)
48
+
49
+ # add this:
50
+ for param in peft_model.parameters():
51
+ if param.requires_grad:
52
+ param.data = param.data.float()
53
+
54
+ # proceed as usual
55
+ trainer = Trainer(model=peft_model, fp16=True, ...)
56
+ trainer.train()
57
+ ```
58
+
59
+ Alternatively, you can use the [`~utils.cast_mixed_precision_params`] function to correctly cast the weights:
60
+
61
+ ```python
62
+ import torch
+
+ from peft import cast_mixed_precision_params
63
+
64
+ peft_model = get_peft_model(...)
65
+ cast_mixed_precision_params(peft_model, dtype=torch.float16)
66
+
67
+ # proceed as usual
68
+ trainer = Trainer(model=peft_model, fp16=True, ...)
69
+ trainer.train()
70
+ ```
71
+
72
+ <Tip>
73
+
74
+ Starting from PEFT version v0.12.0, PEFT automatically promotes the dtype of adapter weights from `torch.float16` and `torch.bfloat16` to `torch.float32` where appropriate. To _prevent_ this behavior, you can pass `autocast_adapter_dtype=False` to [`~get_peft_model`], to [`~PeftModel.from_pretrained`], and to [`~PeftModel.load_adapter`].
75
+
76
+ </Tip>
77
+
78
+ ## Bad results from a loaded PEFT model
79
+
80
+ There can be several reasons for getting a poor result from a loaded PEFT model; they are listed below. If you're still unable to troubleshoot the problem, see if anyone else had a similar [issue](https://github.com/huggingface/peft/issues) on GitHub, and if you can't find any, open a new issue.
81
+
82
+ When opening an issue, it helps a lot if you provide a minimal code example that reproduces the issue. Also, please report if the loaded model performs at the same level as the model did before fine-tuning, if it performs at a random level, or if it is only slightly worse than expected. This information helps us identify the problem more quickly.
83
+
84
+ ### Random deviations
85
+
86
+ If your model outputs are not exactly the same as in previous runs, there could be an issue with random elements (see the short sketch after this list). For example:
87
+
88
+ 1. ensure the model is in `.eval()` mode, which is important if, for instance, the model uses dropout
89
+ 2. if you use [`~transformers.GenerationMixin.generate`] on a language model, there could be random sampling, so obtaining the same result requires setting a random seed
90
+ 3. if you used quantization and merged the weights, small deviations are expected due to rounding errors
91
+
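+ For points 1 and 2, a minimal sketch could look like this (assuming `peft_model` and tokenized `inputs` already exist):
+
+ ```python
+ import torch
+
+ torch.manual_seed(0)  # only matters if you sample; fixes the random seed
+ peft_model.eval()     # disables dropout and other train-time randomness
+
+ with torch.no_grad():
+     # greedy decoding avoids sampling randomness entirely
+     output = peft_model.generate(**inputs, do_sample=False, max_new_tokens=20)
+ ```
+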
92
+ ### Incorrectly loaded model
93
+
94
+ Please ensure that you load the model correctly. A common error is trying to load a _trained_ model with [`get_peft_model`] which is incorrect. Instead, the loading code should look like this:
95
+
96
+ ```python
97
+ from peft import PeftModel, PeftConfig
98
+
99
+ base_model = ... # to load the base model, use the same code as when you trained it
100
+ config = PeftConfig.from_pretrained(peft_model_id)
101
+ peft_model = PeftModel.from_pretrained(base_model, peft_model_id)
102
+ ```
103
+
104
+ ### Randomly initialized layers
105
+
106
+ For some tasks, it is important to correctly configure `modules_to_save` in the config to account for randomly initialized layers.
107
+
108
+ As an example, this is necessary if you use LoRA to fine-tune a language model for sequence classification because 🤗 Transformers adds a randomly initialized classification head on top of the model. If you do not add this layer to `modules_to_save`, the classification head won't be saved. The next time you load the model, you'll get a _different_ randomly initialized classification head, resulting in completely different results.
109
+
110
+ PEFT tries to correctly guess the `modules_to_save` if you provide the `task_type` argument in the config. This should work for transformers models that follow the standard naming scheme. It is always a good idea to double check though because we can't guarantee all models follow the naming scheme.
111
+
112
+ When you load a transformers model that has randomly initialized layers, you should see a warning along the lines of:
113
+
114
+ ```
115
+ Some weights of <MODEL> were not initialized from the model checkpoint at <ID> and are newly initialized: [<LAYER_NAMES>].
116
+ You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
117
+ ```
118
+
119
+ The mentioned layers should be added to `modules_to_save` in the config to avoid the described problem.
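+
+ For example, for a sequence classification task the config could look roughly like the sketch below. The head is often called `"classifier"` or `"score"` depending on the architecture, so use the layer names reported in the warning for your model:
+
+ ```python
+ from peft import LoraConfig
+
+ config = LoraConfig(
+     task_type="SEQ_CLS",
+     # use the layer name(s) reported as newly initialized for your model
+     modules_to_save=["classifier"],
+ )
+ ```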
120
+
121
+ ### Extending the vocabulary
122
+
123
+ For many language fine-tuning tasks, extending the model's vocabulary is necessary since new tokens are being introduced. This requires extending the embedding layer to account for the new tokens and also storing the embedding layer in addition to the adapter weights when saving the adapter.
124
+
125
+ Save the embedding layer by adding it to the `target_modules` of the config. The embedding layer name must follow the standard naming scheme from Transformers. For example, the Mistral config could look like this:
126
+
127
+ ```python
128
+ config = LoraConfig(..., target_modules=["embed_tokens", "lm_head", "q_proj", "v_proj"])
129
+ ```
130
+
131
+ Once added to `target_modules`, PEFT automatically stores the embedding layer when saving the adapter if the model has the [`~transformers.PreTrainedModel.get_input_embeddings`] and [`~transformers.PreTrainedModel.get_output_embeddings`] methods. This is generally the case for Transformers models.
132
+
133
+ If the model's embedding layer doesn't follow the Transformers naming scheme, you can still save it by manually passing `save_embedding_layers=True` when saving the adapter:
134
+
135
+ ```python
136
+ model = get_peft_model(...)
137
+ # train the model
138
+ model.save_pretrained("my_adapter", save_embedding_layers=True)
139
+ ```
140
+
141
+ For inference, load the base model first and resize it the same way you did before you trained the model. After you've resized the base model, you can load the PEFT checkpoint.
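+
+ A minimal sketch of that loading sequence, assuming the adapter was saved to `"my_adapter"` together with the extended tokenizer (the base model id is illustrative):
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from peft import PeftModel
+
+ base_model_id = "mistralai/Mistral-7B-v0.1"  # use the same base model as for training
+ base_model = AutoModelForCausalLM.from_pretrained(base_model_id)
+ tokenizer = AutoTokenizer.from_pretrained("my_adapter")  # tokenizer containing the new tokens
+ base_model.resize_token_embeddings(len(tokenizer))       # same resize as before training
+ peft_model = PeftModel.from_pretrained(base_model, "my_adapter")
+ ```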
142
+
143
+ For a complete example, please check out [this notebook](https://github.com/huggingface/peft/blob/main/examples/causal_language_modeling/peft_lora_clm_with_additional_tokens.ipynb).
144
+
145
+ ### Check layer and model status
146
+
147
+ Sometimes a PEFT model can end up in a bad state, especially when handling multiple adapters. There can be some confusion around what adapters exist, which one is active, which one is merged, etc. To help investigate this issue, call the [`~peft.PeftModel.get_layer_status`] and the [`~peft.PeftModel.get_model_status`] methods.
148
+
149
+ The [`~peft.PeftModel.get_layer_status`] method gives you a detailed overview of each targeted layer's active, merged, and available adapters.
150
+
151
+ ```python
152
+ >>> from transformers import AutoModel
153
+ >>> from peft import get_peft_model, LoraConfig
154
+
155
+ >>> model_id = "google/flan-t5-small"
156
+ >>> model = AutoModel.from_pretrained(model_id)
157
+ >>> model = get_peft_model(model, LoraConfig())
158
+
159
+ >>> model.get_layer_status()
160
+ [TunerLayerStatus(name='model.encoder.block.0.layer.0.SelfAttention.q',
161
+ module_type='lora.Linear',
162
+ enabled=True,
163
+ active_adapters=['default'],
164
+ merged_adapters=[],
165
+ requires_grad={'default': True},
166
+ available_adapters=['default']),
167
+ TunerLayerStatus(name='model.encoder.block.0.layer.0.SelfAttention.v',
168
+ module_type='lora.Linear',
169
+ enabled=True,
170
+ active_adapters=['default'],
171
+ merged_adapters=[],
172
+ requires_grad={'default': True},
173
+ available_adapters=['default']),
174
+ ...]
175
+
176
+ >>> model.get_model_status()
177
+ TunerModelStatus(
178
+ base_model_type='T5Model',
179
+ adapter_model_type='LoraModel',
180
+ peft_types={'default': 'LORA'},
181
+ trainable_params=344064,
182
+ total_params=60855680,
183
+ num_adapter_layers=48,
184
+ enabled=True,
185
+ active_adapters=['default'],
186
+ merged_adapters=[],
187
+ requires_grad={'default': True},
188
+ available_adapters=['default'],
189
+ )
190
+ ```
191
+
192
+ In the model state output, you should look out for entries that say `"irregular"`. This means PEFT detected an inconsistent state in the model. For instance, if `merged_adapters="irregular"`, it means that at least one adapter was merged on some target modules but not on others. The inference results will most likely be incorrect as a result.
193
+
194
+ The best way to resolve this issue is to reload the whole model and adapter checkpoint(s). Ensure that you don't perform any incorrect operations on the model, e.g. manually merging adapters on some modules but not others.
195
+
196
+ Convert the layer status into a pandas `DataFrame` for easier visual inspection.
197
+
198
+ ```python
199
+ from dataclasses import asdict
200
+ import pandas as pd
201
+
202
+ df = pd.DataFrame(asdict(layer) for layer in model.get_layer_status())
203
+ ```
204
+
205
+ It is possible to get this information for non-PEFT models if they are using PEFT layers under the hood, but some information like the `base_model_type` or the `peft_types` cannot be determined in that case. As an example, you can call this on a [diffusers](https://huggingface.co/docs/diffusers/index) model like so:
206
+
207
+ ```python
208
+ >>> import torch
209
+ >>> from diffusers import StableDiffusionPipeline
210
+ >>> from peft import get_model_status, get_layer_status
211
+
212
+ >>> path = "runwayml/stable-diffusion-v1-5"
213
+ >>> lora_id = "takuma104/lora-test-text-encoder-lora-target"
214
+ >>> pipe = StableDiffusionPipeline.from_pretrained(path, torch_dtype=torch.float16)
215
+ >>> pipe.load_lora_weights(lora_id, adapter_name="adapter-1")
216
+ >>> pipe.load_lora_weights(lora_id, adapter_name="adapter-2")
217
+ >>> pipe.set_lora_device(["adapter-2"], "cuda")
218
+ >>> get_layer_status(pipe.text_encoder)
219
+ [TunerLayerStatus(name='text_model.encoder.layers.0.self_attn.k_proj',
220
+ module_type='lora.Linear',
221
+ enabled=True,
222
+ active_adapters=['adapter-2'],
223
+ merged_adapters=[],
224
+ requires_grad={'adapter-1': False, 'adapter-2': True},
225
+ available_adapters=['adapter-1', 'adapter-2'],
226
+ devices={'adapter-1': ['cpu'], 'adapter-2': ['cuda']}),
227
+ TunerLayerStatus(name='text_model.encoder.layers.0.self_attn.v_proj',
228
+ module_type='lora.Linear',
229
+ enabled=True,
230
+ active_adapters=['adapter-2'],
231
+ merged_adapters=[],
232
+ requires_grad={'adapter-1': False, 'adapter-2': True},
233
+ devices={'adapter-1': ['cpu'], 'adapter-2': ['cuda']}),
234
+ ...]
235
+
236
+ >>> get_model_status(pipe.unet)
237
+ TunerModelStatus(
238
+ base_model_type='other',
239
+ adapter_model_type='None',
240
+ peft_types={},
241
+ trainable_params=797184,
242
+ total_params=861115332,
243
+ num_adapter_layers=128,
244
+ enabled=True,
245
+ active_adapters=['adapter-2'],
246
+ merged_adapters=[],
247
+ requires_grad={'adapter-1': False, 'adapter-2': True},
248
+ available_adapters=['adapter-1', 'adapter-2'],
249
+ devices={'adapter-1': ['cpu'], 'adapter-2': ['cuda']},
250
+ )
251
+ ```
252
+
253
+ ## Reproducibility
254
+
255
+ ### Models using batch norm
256
+
257
+ When loading a trained PEFT model where the base model uses batch norm (e.g. `torch.nn.BatchNorm1d` or `torch.nn.BatchNorm2d`), you may find that you cannot reproduce the exact same outputs. This is because the batch norm layers keep track of running stats during training, but these stats are not part of the PEFT checkpoint. Therefore, when you load the PEFT model, the running stats of the base model will be used (i.e. from before training with PEFT).
258
+
259
+ Depending on your use case, this may not be a big deal. If, however, you need your outputs to be 100% reproducible, you can achieve this by adding the batch norm layers to `modules_to_save`. Below is an example of this using resnet and LoRA. Notice that we set `modules_to_save=["classifier", "normalization"]`. We need the `"classifier"` argument because our task is image classification, and we add the `"normalization"` argument to ensure that the batch norm layers are saved in the PEFT checkpoint.
260
+
261
+ ```python
262
+ from transformers import AutoModelForImageClassification
263
+ from peft import LoraConfig, get_peft_model
264
+
265
+ model_id = "microsoft/resnet-18"
266
+ base_model = AutoModelForImageClassification.from_pretrained(model_id)
267
+ config = LoraConfig(
268
+ target_modules=["convolution"],
269
+ modules_to_save=["classifier", "normalization"],
270
+ )
271
+ ```
272
+
273
+ Depending on the type of model you use, the batch norm layers could have different names than `"normalization"`, so please ensure that the name matches your model architecture.
peft_md_files/index.md ADDED
@@ -0,0 +1,49 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # PEFT
18
+
19
+ 🤗 PEFT (Parameter-Efficient Fine-Tuning) is a library for efficiently adapting large pretrained models to various downstream applications without fine-tuning all of a model's parameters because it is prohibitively costly. PEFT methods only fine-tune a small number of (extra) model parameters - significantly decreasing computational and storage costs - while yielding performance comparable to a fully fine-tuned model. This makes it more accessible to train and store large language models (LLMs) on consumer hardware.
20
+
21
+ PEFT is integrated with the Transformers, Diffusers, and Accelerate libraries to provide a faster and easier way to load, train, and use large models for inference.
22
+
23
+ <div class="mt-10">
24
+ <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
25
+ <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="quicktour"
26
+ ><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Get started</div>
27
+ <p class="text-gray-700">Start here if you're new to 🤗 PEFT to get an overview of the library's main features, and how to train a model with a PEFT method.</p>
28
+ </a>
29
+ <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./task_guides/image_classification_lora"
30
+ ><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
31
+ <p class="text-gray-700">Practical guides demonstrating how to apply various PEFT methods across different types of tasks like image classification, causal language modeling, automatic speech recognition, and more. Learn how to use 🤗 PEFT with the DeepSpeed and Fully Sharded Data Parallel scripts.</p>
32
+ </a>
33
+ <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./conceptual_guides/lora"
34
+ ><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
35
+ <p class="text-gray-700">Get a better theoretical understanding of how LoRA and various soft prompting methods help reduce the number of trainable parameters to make training more efficient.</p>
36
+ </a>
37
+ <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./package_reference/config"
38
+ ><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div>
39
+ <p class="text-gray-700">Technical descriptions of how 🤗 PEFT classes and methods work.</p>
40
+ </a>
41
+ </div>
42
+ </div>
43
+
44
+ <iframe
45
+ src="https://stevhliu-peft-methods.hf.space"
46
+ frameborder="0"
47
+ width="850"
48
+ height="620"
49
+ ></iframe>
peft_md_files/install.md ADDED
@@ -0,0 +1,47 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # Installation
18
+
19
+ Before you start, you will need to set up your environment, install the appropriate packages, and configure 🤗 PEFT. 🤗 PEFT is tested on **Python 3.8+**.
20
+
21
+ 🤗 PEFT is available on PyPI, as well as GitHub:
22
+
23
+ ## PyPI
24
+
25
+ To install 🤗 PEFT from PyPI:
26
+
27
+ ```bash
28
+ pip install peft
29
+ ```
30
+
31
+ ## Source
32
+
33
+ New features that haven't been released yet are added every day, which also means there may be some bugs. To try them out, install from the GitHub repository:
34
+
35
+ ```bash
36
+ pip install git+https://github.com/huggingface/peft
37
+ ```
38
+
39
+ If you're working on contributing to the library or wish to play with the source code and see live
40
+ results as you run the code, an editable version can be installed from a locally-cloned version of the
41
+ repository:
42
+
43
+ ```bash
44
+ git clone https://github.com/huggingface/peft
45
+ cd peft
46
+ pip install -e .
47
+ ```
peft_md_files/package_reference/adalora.md ADDED
@@ -0,0 +1,31 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # AdaLoRA
18
+
19
+ [AdaLoRA](https://hf.co/papers/2303.10512) is a method for optimizing the number of trainable parameters to assign to weight matrices and layers, unlike LoRA, which distributes parameters evenly across all modules. More parameters are budgeted for important weight matrices and layers while less important ones receive fewer parameters.
20
+
21
+ The abstract from the paper is:
22
+
23
+ *Fine-tuning large pre-trained language models on downstream tasks has become an important paradigm in NLP. However, common practice fine-tunes all of the parameters in a pre-trained model, which becomes prohibitive when a large number of downstream tasks are present. Therefore, many fine-tuning methods are proposed to learn incremental updates of pre-trained weights in a parameter efficient way, e.g., low-rank increments. These methods often evenly distribute the budget of incremental updates across all pre-trained weight matrices, and overlook the varying importance of different weight parameters. As a consequence, the fine-tuning performance is suboptimal. To bridge this gap, we propose AdaLoRA, which adaptively allocates the parameter budget among weight matrices according to their importance score. In particular, AdaLoRA parameterizes the incremental updates in the form of singular value decomposition. Such a novel approach allows us to effectively prune the singular values of unimportant updates, which is essentially to reduce their parameter budget but circumvent intensive exact SVD computations. We conduct extensive experiments with several pre-trained models on natural language processing, question answering, and natural language generation to validate the effectiveness of AdaLoRA. Results demonstrate that AdaLoRA manifests notable improvement over baselines, especially in the low budget settings. Our code is publicly available at https://github.com/QingruZhang/AdaLoRA*.
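+
+ A minimal usage sketch; the hyperparameters, target modules, and the `base_model` variable are illustrative and depend on your model and parameter budget:
+
+ ```python
+ from peft import AdaLoraConfig, get_peft_model
+
+ config = AdaLoraConfig(
+     init_r=12,        # initial rank per weight matrix
+     target_r=8,       # average target rank after budget allocation
+     total_step=1000,  # total number of training steps, used by the rank schedule
+     target_modules=["q_proj", "v_proj"],
+     task_type="CAUSAL_LM",
+ )
+ model = get_peft_model(base_model, config)
+ ```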
24
+
25
+ ## AdaLoraConfig
26
+
27
+ [[autodoc]] tuners.adalora.config.AdaLoraConfig
28
+
29
+ ## AdaLoraModel
30
+
31
+ [[autodoc]] tuners.adalora.model.AdaLoraModel
peft_md_files/package_reference/adapter_utils.md ADDED
@@ -0,0 +1,31 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # LyCORIS
18
+
19
+ [LyCORIS](https://hf.co/papers/2309.14859) (Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion) are LoRA-like matrix decomposition adapters that modify the cross-attention layer of the UNet. The [LoHa](loha) and [LoKr](lokr) methods inherit from the `Lycoris` classes here.
20
+
21
+ ## LycorisConfig
22
+
23
+ [[autodoc]] tuners.lycoris_utils.LycorisConfig
24
+
25
+ ## LycorisLayer
26
+
27
+ [[autodoc]] tuners.lycoris_utils.LycorisLayer
28
+
29
+ ## LycorisTuner
30
+
31
+ [[autodoc]] tuners.lycoris_utils.LycorisTuner
peft_md_files/package_reference/auto_class.md ADDED
@@ -0,0 +1,48 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # AutoPeftModels
18
+
19
+ The `AutoPeftModel` classes load the appropriate PEFT model for the task type by automatically inferring it from the configuration file. They are designed to quickly and easily load a PEFT model in a single line of code without you having to worry about which exact model class you need or manually loading a [`PeftConfig`].
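+
+ For example, loading a LoRA adapter for causal language modeling (the repository id is illustrative):
+
+ ```python
+ from peft import AutoPeftModelForCausalLM
+
+ # infers the task type and base model from the adapter's configuration file
+ model = AutoPeftModelForCausalLM.from_pretrained("ybelkada/opt-350m-lora")
+ ```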
20
+
21
+ ## AutoPeftModel
22
+
23
+ [[autodoc]] auto.AutoPeftModel
24
+ - from_pretrained
25
+
26
+ ## AutoPeftModelForCausalLM
27
+
28
+ [[autodoc]] auto.AutoPeftModelForCausalLM
29
+
30
+ ## AutoPeftModelForSeq2SeqLM
31
+
32
+ [[autodoc]] auto.AutoPeftModelForSeq2SeqLM
33
+
34
+ ## AutoPeftModelForSequenceClassification
35
+
36
+ [[autodoc]] auto.AutoPeftModelForSequenceClassification
37
+
38
+ ## AutoPeftModelForTokenClassification
39
+
40
+ [[autodoc]] auto.AutoPeftModelForTokenClassification
41
+
42
+ ## AutoPeftModelForQuestionAnswering
43
+
44
+ [[autodoc]] auto.AutoPeftModelForQuestionAnswering
45
+
46
+ ## AutoPeftModelForFeatureExtraction
47
+
48
+ [[autodoc]] auto.AutoPeftModelForFeatureExtraction
peft_md_files/package_reference/boft.md ADDED
@@ -0,0 +1,31 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # BOFT
18
+
19
+ [Orthogonal Butterfly (BOFT)](https://hf.co/papers/2311.06243) is a generic method designed for finetuning foundation models. It improves the parameter efficiency of the Orthogonal Finetuning (OFT) paradigm by taking inspiration from the Cooley-Tukey fast Fourier transform, showing favorable results across finetuning different foundation models, including large vision transformers, large language models, and text-to-image diffusion models.
20
+
21
+ The abstract from the paper is:
22
+
23
+ *Large foundation models are becoming ubiquitous, but training them from scratch is prohibitively expensive. Thus, efficiently adapting these powerful models to downstream tasks is increasingly important. In this paper, we study a principled finetuning paradigm -- Orthogonal Finetuning (OFT) -- for downstream task adaptation. Despite demonstrating good generalizability, OFT still uses a fairly large number of trainable parameters due to the high dimensionality of orthogonal matrices. To address this, we start by examining OFT from an information transmission perspective, and then identify a few key desiderata that enable better parameter-efficiency. Inspired by how the Cooley-Tukey fast Fourier transform algorithm enables efficient information transmission, we propose an efficient orthogonal parameterization using butterfly structures. We apply this parameterization to OFT, creating a novel parameter-efficient finetuning method, called Orthogonal Butterfly (BOFT). By subsuming OFT as a special case, BOFT introduces a generalized orthogonal finetuning framework. Finally, we conduct an extensive empirical study of adapting large vision transformers, large language models, and text-to-image diffusion models to various downstream tasks in vision and language*.
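+
+ A minimal usage sketch; the block size, butterfly factor, target modules, and the `base_model` variable are illustrative:
+
+ ```python
+ from peft import BOFTConfig, get_peft_model
+
+ config = BOFTConfig(
+     boft_block_size=4,          # size of the orthogonal blocks
+     boft_n_butterfly_factor=2,  # number of butterfly factors
+     target_modules=["q_proj", "v_proj"],
+ )
+ model = get_peft_model(base_model, config)
+ ```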
24
+
25
+ ## BOFTConfig
26
+
27
+ [[autodoc]] tuners.boft.config.BOFTConfig
28
+
29
+ ## BOFTModel
30
+
31
+ [[autodoc]] tuners.boft.model.BOFTModel
peft_md_files/package_reference/config.md ADDED
@@ -0,0 +1,22 @@
1
+ <!--⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
2
+ rendered properly in your Markdown viewer.
3
+ -->
4
+
5
+ # Configuration
6
+
7
+ [`PeftConfigMixin`] is the base configuration class for storing the adapter configuration of a [`PeftModel`], and [`PromptLearningConfig`] is the base configuration class for soft prompt methods (p-tuning, prefix tuning, and prompt tuning). These base classes contain methods for saving and loading model configurations from the Hub, specifying the PEFT method to use, type of task to perform, and model configurations like number of layers and number of attention heads.
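+
+ For example, the configuration of an adapter saved on the Hub can be inspected like this (the repository id is illustrative):
+
+ ```python
+ from peft import PeftConfig
+
+ config = PeftConfig.from_pretrained("ybelkada/opt-350m-lora")
+ print(config.peft_type, config.task_type, config.base_model_name_or_path)
+ ```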
8
+
9
+ ## PeftConfigMixin
10
+
11
+ [[autodoc]] config.PeftConfigMixin
12
+ - all
13
+
14
+ ## PeftConfig
15
+
16
+ [[autodoc]] PeftConfig
17
+ - all
18
+
19
+ ## PromptLearningConfig
20
+
21
+ [[autodoc]] PromptLearningConfig
22
+ - all
peft_md_files/package_reference/fourierft.md ADDED
@@ -0,0 +1,38 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # FourierFT: Discrete Fourier Transformation Fine-Tuning
18
+
19
+ [FourierFT](https://huggingface.co/papers/2405.03003) is a parameter-efficient fine-tuning technique that leverages the Discrete Fourier Transform to compress the model's tunable weights. This method outperforms LoRA on the GLUE benchmark and common ViT classification tasks while using far fewer parameters.
20
+
21
+ FourierFT currently has the following constraints:
22
+
23
+ - Only `nn.Linear` layers are supported.
24
+ - Quantized layers are not supported.
25
+
26
+ If these constraints don't work for your use case, consider other methods instead.
27
+
28
+ The abstract from the paper is:
29
+
30
+ > Low-rank adaptation (LoRA) has recently gained much interest in fine-tuning foundation models. It effectively reduces the number of trainable parameters by incorporating low-rank matrices A and B to represent the weight change, i.e., Delta W=BA. Despite LoRA's progress, it faces storage challenges when handling extensive customization adaptations or larger base models. In this work, we aim to further compress trainable parameters by enjoying the powerful expressiveness of the Fourier transform. Specifically, we introduce FourierFT, which treats Delta W as a matrix in the spatial domain and learns only a small fraction of its spectral coefficients. With the trained spectral coefficients, we implement the inverse discrete Fourier transform to recover Delta W. Empirically, our FourierFT method shows comparable or better performance with fewer parameters than LoRA on various tasks, including natural language understanding, natural language generation, instruction tuning, and image classification. For example, when performing instruction tuning on the LLaMA2-7B model, FourierFT surpasses LoRA with only 0.064M trainable parameters, compared to LoRA's 33.5M.
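+
+ A minimal usage sketch; the number of frequencies, target modules, and the `base_model` variable are illustrative:
+
+ ```python
+ from peft import FourierFTConfig, get_peft_model
+
+ config = FourierFTConfig(
+     n_frequency=1000,  # number of learned spectral coefficients per layer
+     target_modules=["q_proj", "v_proj"],
+     task_type="CAUSAL_LM",
+ )
+ model = get_peft_model(base_model, config)
+ ```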
31
+
32
+ ## FourierFTConfig
33
+
34
+ [[autodoc]] tuners.fourierft.config.FourierFTConfig
35
+
36
+ ## FourierFTModel
37
+
38
+ [[autodoc]] tuners.fourierft.model.FourierFTModel
peft_md_files/package_reference/helpers.md ADDED
@@ -0,0 +1,12 @@
1
+ <!--⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
2
+ rendered properly in your Markdown viewer.
3
+ -->
4
+
5
+ # Helpers
6
+
7
+ A collection of helper functions for PEFT.
8
+
9
+ ## Checking if a model is a PEFT model
10
+
11
+ [[autodoc]] helpers.check_if_peft_model
12
+ - all
peft_md_files/package_reference/ia3.md ADDED
@@ -0,0 +1,31 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # IA3
18
+
19
+ Infused Adapter by Inhibiting and Amplifying Inner Activations, or [IA3](https://hf.co/papers/2205.05638), is a method that adds three learned vectors to rescale the keys and values of the self-attention and encoder-decoder attention layers, and the intermediate activation of the position-wise feed-forward network.
20
+
21
+ The abstract from the paper is:
22
+
23
+ *Few-shot in-context learning (ICL) enables pre-trained language models to perform a previously-unseen task without any gradient-based training by feeding a small number of training examples as part of the input. ICL incurs substantial computational, memory, and storage costs because it involves processing all of the training examples every time a prediction is made. Parameter-efficient fine-tuning (PEFT) (e.g. adapter modules, prompt tuning, sparse update methods, etc.) offers an alternative paradigm where a small set of parameters are trained to enable a model to perform the new task. In this paper, we rigorously compare few-shot ICL and PEFT and demonstrate that the latter offers better accuracy as well as dramatically lower computational costs. Along the way, we introduce a new PEFT method called (IA)^3 that scales activations by learned vectors, attaining stronger performance while only introducing a relatively tiny amount of new parameters. We also propose a simple recipe based on the T0 model called T-Few that can be applied to new tasks without task-specific tuning or modifications. We validate the effectiveness of T-Few on completely unseen tasks by applying it to the RAFT benchmark, attaining super-human performance for the first time and outperforming the state-of-the-art by 6% absolute. All of the code used in our experiments is publicly available*.
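+
+ A minimal usage sketch; the module names below match a T5-style model and, like the `base_model` variable, are illustrative:
+
+ ```python
+ from peft import IA3Config, get_peft_model
+
+ config = IA3Config(
+     task_type="SEQ_2_SEQ_LM",
+     target_modules=["k", "v", "wo"],
+     feedforward_modules=["wo"],  # must be a subset of target_modules
+ )
+ model = get_peft_model(base_model, config)
+ ```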
24
+
25
+ ## IA3Config
26
+
27
+ [[autodoc]] tuners.ia3.config.IA3Config
28
+
29
+ ## IA3Model
30
+
31
+ [[autodoc]] tuners.ia3.model.IA3Model
peft_md_files/package_reference/layernorm_tuning.md ADDED
@@ -0,0 +1,34 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # LayerNorm Tuning
18
+
19
+ LayerNorm Tuning ([LN Tuning](https://huggingface.co/papers/2312.11420)) is a PEFT method that only fine-tunes the parameters of the LayerNorm layers in a model.
20
+ The paper has tested the performance of this method on large language models and has shown that it can achieve strong performance with a significant reduction in the number of trainable parameters and GPU memory usage.
21
+ However, the method is not limited to language models and can be applied to any model that uses LayerNorm layers.
22
+ In this implementation, the default is that all layernorm layers inside a model is finetuned, but it could be used to target other layer types such as `MLP` or `Attention` layers, this can be done by specifying the `target_modules` in the `LNTuningConfig`.
23
+
24
+ The abstract from the paper is:
25
+
26
+ *This paper introduces an efficient strategy to transform Large Language Models (LLMs) into Multi-Modal Large Language Models (MLLMs). By conceptualizing this transformation as a domain adaptation process, i.e., transitioning from text understanding to embracing multiple modalities, we intriguingly note that, within each attention block, tuning LayerNorm suffices to yield strong performance. Moreover, when benchmarked against other tuning approaches like full parameter finetuning or LoRA, its benefits on efficiency are substantial. For example, when compared to LoRA on a 13B model scale, performance can be enhanced by an average of over 20% across five multi-modal tasks, and meanwhile, results in a significant reduction of trainable parameters by 41.9% and a decrease in GPU memory usage by 17.6%. On top of this LayerNorm strategy, we showcase that selectively tuning only with conversational data can improve efficiency further. Beyond these empirical outcomes, we provide a comprehensive analysis to explore the role of LayerNorm in adapting LLMs to the multi-modal domain and improving the expressive power of the model.*
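+
+ A minimal usage sketch (the `base_model` variable is illustrative):
+
+ ```python
+ from peft import LNTuningConfig, get_peft_model
+
+ # for known architectures, target_modules defaults to the model's LayerNorm layers;
+ # set it explicitly to tune other layer types
+ config = LNTuningConfig(task_type="CAUSAL_LM")
+ model = get_peft_model(base_model, config)
+ ```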
27
+
28
+ ## LNTuningConfig
29
+
30
+ [[autodoc]] tuners.ln_tuning.config.LNTuningConfig
31
+
32
+ ## LNTuningModel
33
+
34
+ [[autodoc]] tuners.ln_tuning.model.LNTuningModel
peft_md_files/package_reference/llama_adapter.md ADDED
@@ -0,0 +1,31 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # Llama-Adapter
18
+
19
+ [Llama-Adapter](https://hf.co/papers/2303.16199) is a PEFT method specifically designed for turning Llama into an instruction-following model. The Llama model is frozen and only a set of adaptation prompts prefixed to the input instruction tokens is learned. Since randomly initialized modules inserted into the model can cause the model to lose some of its existing knowledge, Llama-Adapter uses zero-initialized attention with zero gating to progressively add the instructional prompts to the model.
20
+
21
+ The abstract from the paper is:
22
+
23
+ *We present LLaMA-Adapter, a lightweight adaption method to efficiently fine-tune LLaMA into an instruction-following model. Using 52K self-instruct demonstrations, LLaMA-Adapter only introduces 1.2M learnable parameters upon the frozen LLaMA 7B model, and costs less than one hour for fine-tuning on 8 A100 GPUs. Specifically, we adopt a set of learnable adaption prompts, and prepend them to the input text tokens at higher transformer layers. Then, a zero-init attention mechanism with zero gating is proposed, which adaptively injects the new instructional cues into LLaMA, while effectively preserves its pre-trained knowledge. With efficient training, LLaMA-Adapter generates high-quality responses, comparable to Alpaca with fully fine-tuned 7B parameters. Furthermore, our approach can be simply extended to multi-modal input, e.g., images, for image-conditioned LLaMA, which achieves superior reasoning capacity on ScienceQA. We release our code at https://github.com/ZrrSkywalker/LLaMA-Adapter*.
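+
+ A minimal usage sketch; the prompt length, number of layers, and the `llama_model` variable are illustrative:
+
+ ```python
+ from peft import AdaptionPromptConfig, get_peft_model
+
+ config = AdaptionPromptConfig(
+     adapter_len=10,     # number of adaption prompt tokens
+     adapter_layers=30,  # number of top transformer layers that receive the prompts
+     task_type="CAUSAL_LM",
+ )
+ model = get_peft_model(llama_model, config)  # expects a Llama-style model
+ ```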
24
+
25
+ ## AdaptionPromptConfig
26
+
27
+ [[autodoc]] tuners.adaption_prompt.config.AdaptionPromptConfig
28
+
29
+ ## AdaptionPromptModel
30
+
31
+ [[autodoc]] tuners.adaption_prompt.model.AdaptionPromptModel
peft_md_files/package_reference/loha.md ADDED
@@ -0,0 +1,31 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # LoHa
18
+
19
+ Low-Rank Hadamard Product ([LoHa](https://huggingface.co/papers/2108.06098)) is similar to LoRA except it approximates the large weight matrix with more low-rank matrices and combines them with the Hadamard product. This method is even more parameter-efficient than LoRA and achieves comparable performance.
20
+
21
+ The abstract from the paper is:
22
+
23
+ *In this work, we propose a communication-efficient parameterization, FedPara, for federated learning (FL) to overcome the burdens on frequent model uploads and downloads. Our method re-parameterizes weight parameters of layers using low-rank weights followed by the Hadamard product. Compared to the conventional low-rank parameterization, our FedPara method is not restricted to low-rank constraints, and thereby it has a far larger capacity. This property enables to achieve comparable performance while requiring 3 to 10 times lower communication costs than the model with the original layers, which is not achievable by the traditional low-rank methods. The efficiency of our method can be further improved by combining with other efficient FL optimizers. In addition, we extend our method to a personalized FL application, pFedPara, which separates parameters into global and local ones. We show that pFedPara outperforms competing personalized FL methods with more than three times fewer parameters*.
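+
+ A minimal usage sketch; the rank, target modules, and the `base_model` variable are illustrative:
+
+ ```python
+ from peft import LoHaConfig, get_peft_model
+
+ config = LoHaConfig(
+     r=8,
+     alpha=16,
+     target_modules=["q_proj", "v_proj"],
+ )
+ model = get_peft_model(base_model, config)
+ ```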
24
+
25
+ ## LoHaConfig
26
+
27
+ [[autodoc]] tuners.loha.config.LoHaConfig
28
+
29
+ ## LoHaModel
30
+
31
+ [[autodoc]] tuners.loha.model.LoHaModel
peft_md_files/package_reference/lokr.md ADDED
@@ -0,0 +1,27 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # LoKr
18
+
19
+ Low-Rank Kronecker Product ([LoKr](https://hf.co/papers/2309.14859)) is a LoRA-variant method that approximates the large weight matrix with two low-rank matrices and combines them with the Kronecker product. LoKr also provides an optional third low-rank matrix to provide better control during fine-tuning.
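+
+ A minimal usage sketch; the rank, target modules, and the `base_model` variable are illustrative:
+
+ ```python
+ from peft import LoKrConfig, get_peft_model
+
+ config = LoKrConfig(
+     r=8,
+     alpha=16,
+     target_modules=["q_proj", "v_proj"],
+ )
+ model = get_peft_model(base_model, config)
+ ```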
20
+
21
+ ## LoKrConfig
22
+
23
+ [[autodoc]] tuners.lokr.config.LoKrConfig
24
+
25
+ ## LoKrModel
26
+
27
+ [[autodoc]] tuners.lokr.model.LoKrModel
peft_md_files/package_reference/lora.md ADDED
@@ -0,0 +1,35 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # LoRA
18
+
19
+ Low-Rank Adaptation ([LoRA](https://huggingface.co/papers/2309.15223)) is a PEFT method that decomposes the weight update of a large matrix into two smaller low-rank matrices, typically in the attention layers. This drastically reduces the number of parameters that need to be fine-tuned.
20
+
21
+ The abstract from the paper is:
22
+
23
+ *We propose a neural language modeling system based on low-rank adaptation (LoRA) for speech recognition output rescoring. Although pretrained language models (LMs) like BERT have shown superior performance in second-pass rescoring, the high computational cost of scaling up the pretraining stage and adapting the pretrained models to specific domains limit their practical use in rescoring. Here we present a method based on low-rank decomposition to train a rescoring BERT model and adapt it to new domains using only a fraction (0.08%) of the pretrained parameters. These inserted matrices are optimized through a discriminative training objective along with a correlation-based regularization loss. The proposed low-rank adaptation Rescore-BERT (LoRB) architecture is evaluated on LibriSpeech and internal datasets with decreased training times by factors between 5.4 and 3.6.*.
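+
+ A minimal usage sketch; the rank, scaling, target modules, and the `base_model` variable are illustrative:
+
+ ```python
+ from peft import LoraConfig, get_peft_model
+
+ config = LoraConfig(
+     r=8,            # rank of the update matrices
+     lora_alpha=16,  # scaling factor
+     lora_dropout=0.05,
+     target_modules=["q_proj", "v_proj"],
+     task_type="CAUSAL_LM",
+ )
+ model = get_peft_model(base_model, config)
+ model.print_trainable_parameters()
+ ```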
24
+
25
+ ## LoraConfig
26
+
27
+ [[autodoc]] tuners.lora.config.LoraConfig
28
+
29
+ ## LoraModel
30
+
31
+ [[autodoc]] tuners.lora.model.LoraModel
32
+
33
+ ## Utility
34
+
35
+ [[autodoc]] utils.loftq_utils.replace_lora_weights_loftq
peft_md_files/package_reference/merge_utils.md ADDED
@@ -0,0 +1,33 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # Model merge
18
+
19
+ PEFT provides several internal utilities for [merging LoRA adapters](../developer_guides/model_merging) with the TIES and DARE methods.
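+
+ In typical usage, these utilities are invoked indirectly through `add_weighted_adapter`, as described in the linked model merging guide. A minimal sketch, assuming a LoRA `model` with the adapters `"adapter_a"` and `"adapter_b"` already loaded:
+
+ ```python
+ model.add_weighted_adapter(
+     adapters=["adapter_a", "adapter_b"],
+     weights=[1.0, 1.0],
+     adapter_name="merged",
+     combination_type="ties",
+     density=0.2,  # fraction of weights kept when pruning
+ )
+ model.set_adapter("merged")
+ ```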
20
+
21
+ [[autodoc]] utils.merge_utils.prune
22
+
23
+ [[autodoc]] utils.merge_utils.calculate_majority_sign_mask
24
+
25
+ [[autodoc]] utils.merge_utils.disjoint_merge
26
+
27
+ [[autodoc]] utils.merge_utils.task_arithmetic
28
+
29
+ [[autodoc]] utils.merge_utils.ties
30
+
31
+ [[autodoc]] utils.merge_utils.dare_linear
32
+
33
+ [[autodoc]] utils.merge_utils.dare_ties
peft_md_files/package_reference/multitask_prompt_tuning.md ADDED
@@ -0,0 +1,31 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # Multitask prompt tuning
18
+
19
+ [Multitask prompt tuning](https://huggingface.co/papers/2303.02861) decomposes the soft prompts of each task into a single learned transferable prompt instead of a separate prompt for each task. The single learned prompt can be adapted for each task by multiplicative low rank updates.
20
+
21
+ The abstract from the paper is:
22
+
23
+ *Prompt tuning, in which a base pretrained model is adapted to each task via conditioning on learned prompt vectors, has emerged as a promising approach for efficiently adapting large language models to multiple downstream tasks. However, existing methods typically learn soft prompt vectors from scratch, and it has not been clear how to exploit the rich cross-task knowledge with prompt vectors in a multitask learning setting. We propose multitask prompt tuning (MPT), which first learns a single transferable prompt by distilling knowledge from multiple task-specific source prompts. We then learn multiplicative low rank updates to this shared prompt to efficiently adapt it to each downstream target task. Extensive experiments on 23 NLP datasets demonstrate that our proposed approach outperforms the state-of-the-art methods, including the full finetuning baseline in some cases, despite only tuning 0.035% as many task-specific parameters*.
24
+
25
+ ## MultitaskPromptTuningConfig
26
+
27
+ [[autodoc]] tuners.multitask_prompt_tuning.config.MultitaskPromptTuningConfig
28
+
29
+ ## MultitaskPromptEmbedding
30
+
31
+ [[autodoc]] tuners.multitask_prompt_tuning.model.MultitaskPromptEmbedding
peft_md_files/package_reference/oft.md ADDED
@@ -0,0 +1,31 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # OFT
18
+
19
+ [Orthogonal Finetuning (OFT)](https://hf.co/papers/2306.07280) is a method developed for adapting text-to-image diffusion models. It works by reparameterizing the pretrained weight matrices with an orthogonal matrix to preserve information in the pretrained model. To reduce the number of parameters, OFT introduces a block-diagonal structure in the orthogonal matrix.
20
+
21
+ The abstract from the paper is:
22
+
23
+ *Large text-to-image diffusion models have impressive capabilities in generating photorealistic images from text prompts. How to effectively guide or control these powerful models to perform different downstream tasks becomes an important open problem. To tackle this challenge, we introduce a principled finetuning method -- Orthogonal Finetuning (OFT), for adapting text-to-image diffusion models to downstream tasks. Unlike existing methods, OFT can provably preserve hyperspherical energy which characterizes the pairwise neuron relationship on the unit hypersphere. We find that this property is crucial for preserving the semantic generation ability of text-to-image diffusion models. To improve finetuning stability, we further propose Constrained Orthogonal Finetuning (COFT) which imposes an additional radius constraint to the hypersphere. Specifically, we consider two important finetuning text-to-image tasks: subject-driven generation where the goal is to generate subject-specific images given a few images of a subject and a text prompt, and controllable generation where the goal is to enable the model to take in additional control signals. We empirically show that our OFT framework outperforms existing methods in generation quality and convergence speed*.
24
+
25
+ ## OFTConfig
26
+
27
+ [[autodoc]] tuners.oft.config.OFTConfig
28
+
29
+ ## OFTModel
30
+
31
+ [[autodoc]] tuners.oft.model.OFTModel
peft_md_files/package_reference/p_tuning.md ADDED
@@ -0,0 +1,31 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # P-tuning
18
+
19
+ [P-tuning](https://hf.co/papers/2103.10385) adds trainable prompt embeddings to the input that are optimized by a prompt encoder to find a better prompt, eliminating the need to manually design prompts. The prompt tokens can be added anywhere in the input sequence, and P-tuning also introduces anchor tokens to improve performance.
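+
+ A minimal sketch of applying P-tuning through PEFT; the base model and hyperparameter values are illustrative assumptions:
+
+ ```python
+ from transformers import AutoModelForSequenceClassification
+ from peft import PromptEncoderConfig, get_peft_model
+
+ base_model = AutoModelForSequenceClassification.from_pretrained("roberta-large")
+ # encoder_hidden_size is the hidden size of the prompt encoder
+ config = PromptEncoderConfig(task_type="SEQ_CLS", num_virtual_tokens=20, encoder_hidden_size=128)
+ model = get_peft_model(base_model, config)
+ model.print_trainable_parameters()
+ ```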
20
+
21
+ The abstract from the paper is:
22
+
23
+ *While GPTs with traditional fine-tuning fail to achieve strong results on natural language understanding (NLU), we show that GPTs can be better than or comparable to similar-sized BERTs on NLU tasks with a novel method P-tuning -- which employs trainable continuous prompt embeddings. On the knowledge probing (LAMA) benchmark, the best GPT recovers 64\% (P@1) of world knowledge without any additional text provided during test time, which substantially improves the previous best by 20+ percentage points. On the SuperGlue benchmark, GPTs achieve comparable and sometimes better performance to similar-sized BERTs in supervised learning. Importantly, we find that P-tuning also improves BERTs' performance in both few-shot and supervised settings while largely reducing the need for prompt engineering. Consequently, P-tuning outperforms the state-of-the-art approaches on the few-shot SuperGlue benchmark.*.
24
+
25
+ ## PromptEncoderConfig
26
+
27
+ [[autodoc]] tuners.p_tuning.config.PromptEncoderConfig
28
+
29
+ ## PromptEncoder
30
+
31
+ [[autodoc]] tuners.p_tuning.model.PromptEncoder
peft_md_files/package_reference/peft_model.md ADDED
@@ -0,0 +1,77 @@
1
+ <!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
2
+ rendered properly in your Markdown viewer.
3
+ -->
4
+
5
+ # Models
6
+
7
+ [`PeftModel`] is the base model class for specifying the base Transformer model and configuration to apply a PEFT method to. The base `PeftModel` contains methods for loading models from and saving models to the Hub.
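+
+ For example, a minimal sketch of attaching trained adapter weights to a base model and saving only the adapter again; the adapter repo id is the small LoRA checkpoint referenced elsewhere in these docs:
+
+ ```python
+ from transformers import AutoModelForCausalLM
+ from peft import PeftModel
+
+ base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
+ # load adapter weights from the Hub on top of the frozen base model
+ peft_model = PeftModel.from_pretrained(base_model, "ybelkada/opt-350m-lora")
+ # save only the adapter weights to a local directory
+ peft_model.save_pretrained("opt-350m-lora-local")
+ ```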
8
+
9
+ ## PeftModel
10
+
11
+ [[autodoc]] PeftModel
12
+ - all
13
+
14
+ ## PeftModelForSequenceClassification
15
+
16
+ A `PeftModel` for sequence classification tasks.
17
+
18
+ [[autodoc]] PeftModelForSequenceClassification
19
+ - all
20
+
21
+ ## PeftModelForTokenClassification
22
+
23
+ A `PeftModel` for token classification tasks.
24
+
25
+ [[autodoc]] PeftModelForTokenClassification
26
+ - all
27
+
28
+ ## PeftModelForCausalLM
29
+
30
+ A `PeftModel` for causal language modeling.
31
+
32
+ [[autodoc]] PeftModelForCausalLM
33
+ - all
34
+
35
+ ## PeftModelForSeq2SeqLM
36
+
37
+ A `PeftModel` for sequence-to-sequence language modeling.
38
+
39
+ [[autodoc]] PeftModelForSeq2SeqLM
40
+ - all
41
+
42
+ ## PeftModelForQuestionAnswering
43
+
44
+ A `PeftModel` for question answering.
45
+
46
+ [[autodoc]] PeftModelForQuestionAnswering
47
+ - all
48
+
49
+ ## PeftModelForFeatureExtraction
50
+
51
+ A `PeftModel` for extracting features/embeddings from transformer models.
52
+
53
+ [[autodoc]] PeftModelForFeatureExtraction
54
+ - all
55
+
56
+ ## PeftMixedModel
57
+
58
+ A `PeftModel` for mixing different adapter types (e.g. LoRA and LoHa).
59
+
60
+ [[autodoc]] PeftMixedModel
61
+ - all
62
+
63
+ ## Utilities
64
+
65
+ [[autodoc]] utils.cast_mixed_precision_params
66
+
67
+ [[autodoc]] get_peft_model
68
+
69
+ [[autodoc]] inject_adapter_in_model
70
+
71
+ [[autodoc]] utils.get_peft_model_state_dict
72
+
73
+ [[autodoc]] utils.prepare_model_for_kbit_training
74
+
75
+ [[autodoc]] get_layer_status
76
+
77
+ [[autodoc]] get_model_status
peft_md_files/package_reference/peft_types.md ADDED
@@ -0,0 +1,27 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # PEFT types
18
+
19
+ [`PeftType`] enumerates the adapter methods supported by PEFT, and [`TaskType`] enumerates the task types PEFT supports.
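+
+ Both are Python enums. As a small illustrative sketch, a [`TaskType`] member is passed to a PEFT config, and the config sets its own [`PeftType`] automatically:
+
+ ```python
+ from peft import LoraConfig, TaskType
+
+ config = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, r=8)
+ print(config.peft_type)  # PeftType.LORA
+ print(config.task_type)  # TaskType.SEQ_2_SEQ_LM
+ ```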
20
+
21
+ ## PeftType
22
+
23
+ [[autodoc]] utils.peft_types.PeftType
24
+
25
+ ## TaskType
26
+
27
+ [[autodoc]] utils.peft_types.TaskType
peft_md_files/package_reference/poly.md ADDED
@@ -0,0 +1,44 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # Polytropon
18
+
19
+ [Polytropon](https://hf.co/papers/2202.13914) is a multitask model with a number of different LoRA adapters in its "inventory". The model learns the correct combination of adapters from the inventory with a routing function that chooses the best subset of modules for a specific task. PEFT also supports [Multi-Head Adapter Routing (MHR)](https://hf.co/papers/2211.03831) for Polytropon, which builds on and improves the routing function by combining the adapter heads more granularly. The adapter heads are separated into disjoint blocks and a different routing function is learned for each one, allowing for more expressivity.
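+
+ A minimal sketch of configuring Polytropon in PEFT; the base model, target module names, and all hyperparameter values are assumptions made only for the example (see [`PolyConfig`] below for the exact parameters):
+
+ ```python
+ from transformers import AutoModelForSeq2SeqLM
+ from peft import PolyConfig, TaskType, get_peft_model
+
+ base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
+ config = PolyConfig(
+     task_type=TaskType.SEQ_2_SEQ_LM,
+     target_modules=["q", "v"],  # T5 attention projections (assumption)
+     poly_type="poly",           # routing variant
+     r=8,                        # rank of each LoRA adapter in the inventory
+     n_tasks=4,                  # number of training tasks
+     n_skills=8,                 # number of adapters ("skills") in the inventory
+     n_splits=1,                 # values > 1 enable Multi-Head Routing (MHR)
+ )
+ model = get_peft_model(base_model, config)
+ model.print_trainable_parameters()
+ ```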
20
+
21
+ <hfoptions id="paper">
22
+ <hfoption id="Combining Modular Skills in Multitask Learning">
23
+
24
+ The abstract from the paper is:
25
+
26
+ *A modular design encourages neural models to disentangle and recombine different facets of knowledge to generalise more systematically to new tasks. In this work, we assume that each task is associated with a subset of latent discrete skills from a (potentially small) inventory. In turn, skills correspond to parameter-efficient (sparse / low-rank) model parameterisations. By jointly learning these and a task-skill allocation matrix, the network for each task is instantiated as the average of the parameters of active skills. To favour non-trivial soft partitions of skills across tasks, we experiment with a series of inductive biases, such as an Indian Buffet Process prior and a two-speed learning rate. We evaluate our latent-skill model on two main settings: 1) multitask reinforcement learning for grounded instruction following on 8 levels of the BabyAI platform; and 2) few-shot adaptation of pre-trained text-to-text generative models on CrossFit, a benchmark comprising 160 NLP tasks. We find that the modular design of a network significantly increases sample efficiency in reinforcement learning and few-shot generalisation in supervised learning, compared to baselines with fully shared, task-specific, or conditionally generated parameters where knowledge is entangled across tasks. In addition, we show how discrete skills help interpretability, as they yield an explicit hierarchy of tasks.*
27
+
28
+ </hfoption>
29
+ <hfoption id="Multi-Head Adapter Routing for Cross-Task Generalization">
30
+
31
+ The abstract from the paper is:
32
+
33
+ *Parameter-efficient fine-tuning (PEFT) for cross-task generalization consists in pre-training adapters on a multi-task training set before few-shot adaptation to test tasks. Polytropon [Ponti et al., 2023] (Poly) jointly learns an inventory of adapters and a routing function that selects a (variable-size) subset of adapters for each task during both pre-training and few-shot adaptation. In this paper, we investigate the role that adapter routing plays in its success and design new variants based on our findings. First, we build on the intuition that finer-grained routing provides more expressivity. Hence, we propose MHR (Multi-Head Routing), which combines subsets of adapter parameters and outperforms Poly under a comparable parameter budget; by only fine-tuning the routing function and not the adapters (MHR-z), we achieve competitive performance with extreme parameter efficiency. Second, we find that Poly/MHR performance is a result of better multi-task optimization, rather than modular inductive biases that facilitate adapter recombination and local adaptation, as previously hypothesized. In fact, we find that MHR exhibits higher gradient alignment between tasks than any other method. Since this implies that routing is only crucial during multi-task pre-training, we propose MHR-mu, which discards routing and fine-tunes the average of the pre-trained adapters during few-shot adaptation. This establishes MHR-mu as an effective method for single-adapter fine-tuning.*.
34
+
35
+ </hfoption>
36
+ </hfoptions>
37
+
38
+ ## PolyConfig
39
+
40
+ [[autodoc]] tuners.poly.config.PolyConfig
41
+
42
+ ## PolyModel
43
+
44
+ [[autodoc]] tuners.poly.model.PolyModel
peft_md_files/package_reference/prefix_tuning.md ADDED
@@ -0,0 +1,31 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # Prefix tuning
18
+
19
+ [Prefix tuning](https://hf.co/papers/2101.00190) prepends a sequence of task-specific vectors (the prefix) to the input that are learned while the pretrained model is kept frozen. The prefix parameters are inserted in all of the model layers.
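+
+ A minimal sketch of applying prefix tuning with PEFT; the base model and the number of virtual tokens are illustrative assumptions:
+
+ ```python
+ from transformers import AutoModelForSeq2SeqLM
+ from peft import PrefixTuningConfig, TaskType, get_peft_model
+
+ base_model = AutoModelForSeq2SeqLM.from_pretrained("t5-large")
+ config = PrefixTuningConfig(task_type=TaskType.SEQ_2_SEQ_LM, num_virtual_tokens=20)
+ model = get_peft_model(base_model, config)
+ model.print_trainable_parameters()
+ ```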
20
+
21
+ The abstract from the paper is:
22
+
23
+ *Fine-tuning is the de facto way to leverage large pretrained language models to perform downstream tasks. However, it modifies all the language model parameters and therefore necessitates storing a full copy for each task. In this paper, we propose prefix-tuning, a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen, but optimizes a small continuous task-specific vector (called the prefix). Prefix-tuning draws inspiration from prompting, allowing subsequent tokens to attend to this prefix as if it were "virtual tokens". We apply prefix-tuning to GPT-2 for table-to-text generation and to BART for summarization. We find that by learning only 0.1\% of the parameters, prefix-tuning obtains comparable performance in the full data setting, outperforms fine-tuning in low-data settings, and extrapolates better to examples with topics unseen during training*.
24
+
25
+ ## PrefixTuningConfig
26
+
27
+ [[autodoc]] tuners.prefix_tuning.config.PrefixTuningConfig
28
+
29
+ ## PrefixEncoder
30
+
31
+ [[autodoc]] tuners.prefix_tuning.model.PrefixEncoder
peft_md_files/package_reference/prompt_tuning.md ADDED
@@ -0,0 +1,31 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # Prompt tuning
18
+
19
+ [Prompt tuning](https://hf.co/papers/2104.08691) adds task-specific soft prompts to the input, and these prompt parameters are updated independently of the pretrained model parameters, which remain frozen.
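+
+ A minimal sketch of applying prompt tuning with PEFT; the base model, prompt length, and initialization text are illustrative assumptions:
+
+ ```python
+ from transformers import AutoModelForCausalLM
+ from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model
+
+ base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
+ config = PromptTuningConfig(
+     task_type=TaskType.CAUSAL_LM,
+     num_virtual_tokens=8,
+     prompt_tuning_init=PromptTuningInit.TEXT,
+     prompt_tuning_init_text="Classify if the tweet is a complaint or not:",
+     tokenizer_name_or_path="bigscience/bloomz-560m",
+ )
+ model = get_peft_model(base_model, config)
+ model.print_trainable_parameters()
+ ```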
20
+
21
+ The abstract from the paper is:
22
+
23
+ *In this work, we explore "prompt tuning", a simple yet effective mechanism for learning "soft prompts" to condition frozen language models to perform specific downstream tasks. Unlike the discrete text prompts used by GPT-3, soft prompts are learned through backpropagation and can be tuned to incorporate signal from any number of labeled examples. Our end-to-end learned approach outperforms GPT-3's "few-shot" learning by a large margin. More remarkably, through ablations on model size using T5, we show that prompt tuning becomes more competitive with scale: as models exceed billions of parameters, our method "closes the gap" and matches the strong performance of model tuning (where all model weights are tuned). This finding is especially relevant in that large models are costly to share and serve, and the ability to reuse one frozen model for multiple downstream tasks can ease this burden. Our method can be seen as a simplification of the recently proposed "prefix tuning" of Li and Liang (2021), and we provide a comparison to this and other similar approaches. Finally, we show that conditioning a frozen model with soft prompts confers benefits in robustness to domain transfer, as compared to full model tuning*.
24
+
25
+ ## PromptTuningConfig
26
+
27
+ [[autodoc]] tuners.prompt_tuning.config.PromptTuningConfig
28
+
29
+ ## PromptEmbedding
30
+
31
+ [[autodoc]] tuners.prompt_tuning.model.PromptEmbedding
peft_md_files/package_reference/tuners.md ADDED
@@ -0,0 +1,27 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # Tuners
18
+
19
+ A tuner (or adapter) is a module that can be plugged into a `torch.nn.Module`. [`BaseTuner`] is the base class for other tuners and provides shared methods and attributes for preparing an adapter configuration and replacing a target module with the adapter module. [`BaseTunerLayer`] is the base class for adapter layers. It offers methods and attributes for managing adapters, such as activating and disabling them.
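+
+ As a small illustrative sketch (the base model and target modules are assumptions), every module that a tuner injects can be located by checking for [`BaseTunerLayer`]:
+
+ ```python
+ from transformers import AutoModelForCausalLM
+ from peft import LoraConfig, get_peft_model
+ from peft.tuners.tuners_utils import BaseTunerLayer
+
+ model = get_peft_model(
+     AutoModelForCausalLM.from_pretrained("facebook/opt-350m"),
+     LoraConfig(r=8, target_modules=["q_proj", "v_proj"]),
+ )
+ # modules replaced by the tuner are subclasses of BaseTunerLayer
+ adapter_layers = [m for m in model.modules() if isinstance(m, BaseTunerLayer)]
+ print(len(adapter_layers), adapter_layers[0].active_adapters)
+ ```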
20
+
21
+ ## BaseTuner
22
+
23
+ [[autodoc]] tuners.tuners_utils.BaseTuner
24
+
25
+ ## BaseTunerLayer
26
+
27
+ [[autodoc]] tuners.tuners_utils.BaseTunerLayer
peft_md_files/package_reference/vera.md ADDED
@@ -0,0 +1,42 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # VeRA: Vector-based Random Matrix Adaptation
18
+
19
+ [VeRA](https://huggingface.co/papers/2310.11454) is a parameter-efficient fine-tuning technique that is similar to LoRA but requires even fewer extra parameters while promising similar or even better performance. As such, it is particularly useful when the parameter budget is very limited, e.g. when scaling to very large models. The reduction in trainable parameters is achieved by sharing the same low-rank matrices across all layers and only training two additional vectors per layer.
20
+
21
+ When saving the adapter parameters, it's possible to eschew storing the low rank matrices by setting `save_projection=False` on the `VeraConfig`. In that case, these matrices will be restored based on the fixed random seed from the `projection_prng_key` argument. This cuts down on the size of the checkpoint, but we cannot guarantee reproducibility on all devices and for all future versions of PyTorch. If you want to ensure reproducibility, set `save_projection=True` (which is the default).
22
+
23
+ To handle different shapes of adapted layers, VeRA initializes shared A and B matrices with the largest required size for each dimension. During the forward pass, submatrices A and B for a given layer are sliced out from these shared matrices and used as described in the paper. For example, adapting two linear layers of shapes (100, 20) and (80, 50) will create A and B matrices of shapes (rank, 50) and (100, rank) respectively. Then, to adapt a layer of shape (100, 20), submatrices A and B of shapes (rank, 20) and (100, rank) will be extracted.
24
+
25
+ VeRA currently has the following constraints:
26
+
27
+ - Only `nn.Linear` layers are supported.
28
+ - Quantized layers are not supported.
29
+
30
+ If these constraints don't work for your use case, use LoRA instead.
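+
+ A minimal sketch of configuring VeRA; the base model, target module names, and rank are assumptions made only for the example:
+
+ ```python
+ from transformers import AutoModelForCausalLM
+ from peft import VeraConfig, get_peft_model
+
+ base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
+ config = VeraConfig(
+     r=256,                                # shared low-rank dimension
+     target_modules=["q_proj", "v_proj"],
+     save_projection=True,                 # keep the frozen random matrices in the checkpoint (default)
+ )
+ model = get_peft_model(base_model, config)
+ model.print_trainable_parameters()
+ ```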
31
+
32
+ The abstract from the paper is:
33
+
34
+ > Low-rank adapation (LoRA) is a popular method that reduces the number of trainable parameters when finetuning large language models, but still faces acute storage challenges when scaling to even larger models or deploying numerous per-user or per-task adapted models. In this work, we present Vector-based Random Matrix Adaptation (VeRA), which significantly reduces the number of trainable parameters compared to LoRA, yet maintains the same performance. It achieves this by using a single pair of low-rank matrices shared across all layers and learning small scaling vectors instead. We demonstrate its effectiveness on the GLUE and E2E benchmarks, image classification tasks, and show its application in instruction-tuning of 7B and 13B language models.
35
+
36
+ ## VeRAConfig
37
+
38
+ [[autodoc]] tuners.vera.config.VeraConfig
39
+
40
+ ## VeRAModel
41
+
42
+ [[autodoc]] tuners.vera.model.VeraModel
peft_md_files/quicktour.md ADDED
@@ -0,0 +1,170 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # Quicktour
18
+
19
+ PEFT offers parameter-efficient methods for finetuning large pretrained models. The traditional paradigm is to finetune all of a model's parameters for each downstream task, but this is becoming exceedingly costly and impractical because of the enormous number of parameters in models today. Instead, it is more efficient to train a smaller number of prompt parameters or use a reparametrization method like low-rank adaptation (LoRA) to reduce the number of trainable parameters.
20
+
21
+ This quicktour will show you PEFT's main features and how you can train or run inference on large models that would typically be inaccessible on consumer devices.
22
+
23
+ ## Train
24
+
25
+ Each PEFT method is defined by a [`PeftConfig`] class that stores all the important parameters for building a [`PeftModel`]. For example, to train with LoRA, load and create a [`LoraConfig`] class and specify the following parameters:
26
+
27
+ - `task_type`: the task to train for (sequence-to-sequence language modeling in this case)
28
+ - `inference_mode`: whether you're using the model for inference or not
29
+ - `r`: the dimension of the low-rank matrices
30
+ - `lora_alpha`: the scaling factor for the low-rank matrices
31
+ - `lora_dropout`: the dropout probability of the LoRA layers
32
+
33
+ ```python
34
+ from peft import LoraConfig, TaskType
35
+
36
+ peft_config = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, r=8, lora_alpha=32, lora_dropout=0.1)
37
+ ```
38
+
39
+ <Tip>
40
+
41
+ See the [`LoraConfig`] reference for more details about other parameters you can adjust, such as the modules to target or the bias type.
42
+
43
+ </Tip>
44
+
45
+ Once the [`LoraConfig`] is set up, create a [`PeftModel`] with the [`get_peft_model`] function. It takes a base model - which you can load from the Transformers library - and the [`LoraConfig`] containing the parameters for how to configure a model for training with LoRA.
46
+
47
+ Load the base model you want to finetune.
48
+
49
+ ```python
50
+ from transformers import AutoModelForSeq2SeqLM
51
+
52
+ model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large")
53
+ ```
54
+
55
+ Wrap the base model and `peft_config` with the [`get_peft_model`] function to create a [`PeftModel`]. To get a sense of the number of trainable parameters in your model, use the [`print_trainable_parameters`] method.
56
+
57
+ ```python
58
+ from peft import get_peft_model
59
+
60
+ model = get_peft_model(model, peft_config)
61
+ model.print_trainable_parameters()
62
+ "output: trainable params: 2359296 || all params: 1231940608 || trainable%: 0.19151053100118282"
63
+ ```
64
+
65
+ Out of [bigscience/mt0-large's](https://huggingface.co/bigscience/mt0-large) 1.2B parameters, you're only training 0.19% of them!
66
+
67
+ That is it 🎉! Now you can train the model with the Transformers [`~transformers.Trainer`], Accelerate, or any custom PyTorch training loop.
68
+
69
+ For example, to train with the [`~transformers.Trainer`] class, set up a [`~transformers.TrainingArguments`] class with some training hyperparameters.
70
+
71
+ ```py
72
+ from transformers import Trainer, TrainingArguments
+
+ training_args = TrainingArguments(
73
+ output_dir="your-name/bigscience/mt0-large-lora",
74
+ learning_rate=1e-3,
75
+ per_device_train_batch_size=32,
76
+ per_device_eval_batch_size=32,
77
+ num_train_epochs=2,
78
+ weight_decay=0.01,
79
+ evaluation_strategy="epoch",
80
+ save_strategy="epoch",
81
+ load_best_model_at_end=True,
82
+ )
83
+ ```
84
+
85
+ Pass the model, training arguments, dataset, tokenizer, and any other necessary component to the [`~transformers.Trainer`], and call [`~transformers.Trainer.train`] to start training.
86
+
87
+ ```py
88
+ trainer = Trainer(
89
+ model=model,
90
+ args=training_args,
91
+ train_dataset=tokenized_datasets["train"],
92
+ eval_dataset=tokenized_datasets["test"],
93
+ tokenizer=tokenizer,
94
+ data_collator=data_collator,
95
+ compute_metrics=compute_metrics,
96
+ )
97
+
98
+ trainer.train()
99
+ ```
100
+
101
+ ### Save model
102
+
103
+ After your model is finished training, you can save your model to a directory using the [`~transformers.PreTrainedModel.save_pretrained`] function.
104
+
105
+ ```py
106
+ model.save_pretrained("output_dir")
107
+ ```
108
+
109
+ You can also save your model to the Hub (make sure you're logged in to your Hugging Face account first) with the [`~transformers.PreTrainedModel.push_to_hub`] function.
110
+
111
+ ```python
112
+ from huggingface_hub import notebook_login
113
+
114
+ notebook_login()
115
+ model.push_to_hub("your-name/mt0-large-lora")
116
+ ```
117
+
118
+ Both methods only save the extra PEFT weights that were trained, meaning it is super efficient to store, transfer, and load. For example, this [facebook/opt-350m](https://huggingface.co/ybelkada/opt-350m-lora) model trained with LoRA only contains two files: `adapter_config.json` and `adapter_model.safetensors`. The `adapter_model.safetensors` file is just 6.3MB!
119
+
120
+ <div class="flex flex-col justify-center">
121
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/PEFT-hub-screenshot.png"/>
122
+ <figcaption class="text-center">The adapter weights for an opt-350m model stored on the Hub are only ~6MB compared to the full size of the model weights, which can be ~700MB.</figcaption>
123
+ </div>
124
+
125
+ ## Inference
126
+
127
+ <Tip>
128
+
129
+ Take a look at the [AutoPeftModel](package_reference/auto_class) API reference for a complete list of available `AutoPeftModel` classes.
130
+
131
+ </Tip>
132
+
133
+ Easily load any PEFT-trained model for inference with the [`AutoPeftModel`] class and the [`~transformers.PreTrainedModel.from_pretrained`] method:
134
+
135
+ ```py
136
+ from peft import AutoPeftModelForCausalLM
137
+ from transformers import AutoTokenizer
138
+ import torch
139
+
140
+ model = AutoPeftModelForCausalLM.from_pretrained("ybelkada/opt-350m-lora")
141
+ tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
142
+
143
+ model = model.to("cuda")
144
+ model.eval()
145
+ inputs = tokenizer("Preheat the oven to 350 degrees and place the cookie dough", return_tensors="pt")
146
+
147
+ outputs = model.generate(input_ids=inputs["input_ids"].to("cuda"), max_new_tokens=50)
148
+ print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0])
149
+
150
+ "Preheat the oven to 350 degrees and place the cookie dough in the center of the oven. In a large bowl, combine the flour, baking powder, baking soda, salt, and cinnamon. In a separate bowl, combine the egg yolks, sugar, and vanilla."
151
+ ```
152
+
153
+ For other tasks that aren't explicitly supported with an `AutoPeftModelFor` class - such as automatic speech recognition - you can still use the base [`AutoPeftModel`] class to load a model for the task.
154
+
155
+ ```py
156
+ from peft import AutoPeftModel
157
+
158
+ model = AutoPeftModel.from_pretrained("smangrul/openai-whisper-large-v2-LORA-colab")
159
+ ```
160
+
161
+ ## Next steps
162
+
163
+ Now that you've seen how to train a model with one of the PEFT methods, we encourage you to try out some of the other methods like prompt tuning. The steps are very similar to the ones shown in the quicktour:
164
+
165
+ 1. prepare a [`PeftConfig`] for a PEFT method
166
+ 2. use the [`get_peft_model`] method to create a [`PeftModel`] from the configuration and base model
167
+
168
+ Then you can train it however you like! To load a PEFT model for inference, you can use the [`AutoPeftModel`] class.
169
+
170
+ Feel free to also take a look at the task guides if you're interested in training a model with another PEFT method for a specific task such as semantic segmentation, multilingual automatic speech recognition, DreamBooth, token classification, and more.
peft_md_files/task_guides/ia3.md ADDED
@@ -0,0 +1,239 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # IA3
18
+
19
+ [IA3](../conceptual_guides/ia3) multiplies the model's activations (the keys and values in the self-attention and encoder-decoder attention blocks, and the intermediate activation of the position-wise feedforward network) by three learned vectors. This PEFT method introduces an even smaller number of trainable parameters than LoRA, which introduces weight matrices instead of vectors. The original model's parameters are kept frozen and only these vectors are updated. As a result, it is faster, cheaper, and more efficient to finetune for a new downstream task.
20
+
21
+ This guide will show you how to train a sequence-to-sequence model with IA3 to *generate a sentiment* given some financial news.
22
+
23
+ <Tip>
24
+
25
+ Some familiarity with the general process of training a sequence-to-sequence model would be really helpful and allow you to focus on how to apply IA3. If you’re new, we recommend taking a look at the [Translation](https://huggingface.co/docs/transformers/tasks/translation) and [Summarization](https://huggingface.co/docs/transformers/tasks/summarization) guides first from the Transformers documentation. When you’re ready, come back and see how easy it is to drop PEFT into your training!
26
+
27
+ </Tip>
28
+
29
+ ## Dataset
30
+
31
+ You'll use the sentences_allagree subset of the [financial_phrasebank](https://huggingface.co/datasets/financial_phrasebank) dataset. This subset contains financial news with 100% annotator agreement on the sentiment label. Take a look at the [dataset viewer](https://huggingface.co/datasets/financial_phrasebank/viewer/sentences_allagree) for a better idea of the data and sentences you'll be working with.
32
+
33
+ Load the dataset with the [`~datasets.load_dataset`] function. This subset of the dataset only contains a train split, so use the [`~datasets.Dataset.train_test_split`] function to create a train and validation split. Create a new `text_label` column so it is easier to understand what the `label` values `0`, `1`, and `2` mean.
34
+
35
+ ```py
36
+ from datasets import load_dataset
37
+
38
+ ds = load_dataset("financial_phrasebank", "sentences_allagree")
39
+ ds = ds["train"].train_test_split(test_size=0.1)
40
+ ds["validation"] = ds["test"]
41
+ del ds["test"]
42
+
43
+ classes = ds["train"].features["label"].names
44
+ ds = ds.map(
45
+ lambda x: {"text_label": [classes[label] for label in x["label"]]},
46
+ batched=True,
47
+ num_proc=1,
48
+ )
49
+
50
+ ds["train"][0]
51
+ {'sentence': 'It will be operated by Nokia , and supported by its Nokia NetAct network and service management system .',
52
+ 'label': 1,
53
+ 'text_label': 'neutral'}
54
+ ```
55
+
56
+ Load a tokenizer and create a preprocessing function that:
57
+
58
+ 1. tokenizes the inputs, pads and truncates the sequence to the `max_length`
59
+ 2. applies the same tokenizer to the labels but with a shorter `max_length` that corresponds to the label
60
+ 3. masks the padding tokens
61
+
62
+ ```py
63
+ from transformers import AutoTokenizer
64
+
65
+ text_column = "sentence"
66
+ label_column = "text_label"
67
+ max_length = 128
68
+
69
+ tokenizer = AutoTokenizer.from_pretrained("bigscience/mt0-large")
70
+
71
+ def preprocess_function(examples):
72
+ inputs = examples[text_column]
73
+ targets = examples[label_column]
74
+ model_inputs = tokenizer(inputs, max_length=max_length, padding="max_length", truncation=True, return_tensors="pt")
75
+ labels = tokenizer(targets, max_length=3, padding="max_length", truncation=True, return_tensors="pt")
76
+ labels = labels["input_ids"]
77
+ labels[labels == tokenizer.pad_token_id] = -100
78
+ model_inputs["labels"] = labels
79
+ return model_inputs
80
+ ```
81
+
82
+ Use the [`~datasets.Dataset.map`] function to apply the preprocessing function to the entire dataset.
83
+
84
+ ```py
85
+ processed_ds = ds.map(
86
+ preprocess_function,
87
+ batched=True,
88
+ num_proc=1,
89
+ remove_columns=ds["train"].column_names,
90
+ load_from_cache_file=False,
91
+ desc="Running tokenizer on dataset",
92
+ )
93
+ ```
94
+
95
+ Create a training and evaluation [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader), and set `pin_memory=True` to speed up data transfer to the GPU during training if your dataset samples are on a CPU.
96
+
97
+ ```py
98
+ from torch.utils.data import DataLoader
99
+ from transformers import default_data_collator
100
+
101
+ train_ds = processed_ds["train"]
102
+ eval_ds = processed_ds["validation"]
103
+
104
+ batch_size = 8
105
+
106
+ train_dataloader = DataLoader(
107
+ train_ds, shuffle=True, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True
108
+ )
109
+ eval_dataloader = DataLoader(eval_ds, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True)
110
+ ```
111
+
112
+ ## Model
113
+
114
+ Now you can load a pretrained model to use as the base model for IA3. This guide uses the [bigscience/mt0-large](https://huggingface.co/bigscience/mt0-large) model, but you can use any sequence-to-sequence model you like.
115
+
116
+ ```py
117
+ from transformers import AutoModelForSeq2SeqLM
118
+
119
+ model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large")
120
+ ```
121
+
122
+ ### PEFT configuration and model
123
+
124
+ All PEFT methods need a configuration that specifies all the parameters for how the PEFT method should be applied. Create an [`IA3Config`] with the task type and set the inference mode to `False`. You can find additional parameters for this configuration in the [API reference](../package_reference/ia3#ia3config).
125
+
126
+ <Tip>
127
+
128
+ Call the [`~PeftModel.print_trainable_parameters`] method to compare the number of trainable parameters of [`PeftModel`] versus the number of parameters in the base model!
129
+
130
+ </Tip>
131
+
132
+ Once the configuration is set up, pass it to the [`get_peft_model`] function along with the base model to create a trainable [`PeftModel`].
133
+
134
+ ```py
135
+ from peft import IA3Config, get_peft_model
136
+
137
+ peft_config = IA3Config(task_type="SEQ_2_SEQ_LM")
138
+ model = get_peft_model(model, peft_config)
139
+ model.print_trainable_parameters()
140
+ "trainable params: 282,624 || all params: 1,229,863,936 || trainable%: 0.022980103060766553"
141
+ ```
142
+
143
+ ### Training
144
+
145
+ Set up an optimizer and learning rate scheduler.
146
+
147
+ ```py
148
+ import torch
149
+ from transformers import get_linear_schedule_with_warmup
150
+
151
+ lr = 8e-3
152
+ num_epochs = 3
153
+
154
+ optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
155
+ lr_scheduler = get_linear_schedule_with_warmup(
156
+ optimizer=optimizer,
157
+ num_warmup_steps=0,
158
+ num_training_steps=(len(train_dataloader) * num_epochs),
159
+ )
160
+ ```
161
+
162
+ Move the model to the GPU and create a training loop that reports the loss and perplexity for each epoch.
163
+
164
+ ```py
165
+ from tqdm import tqdm
166
+
167
+ device = "cuda"
168
+ model = model.to(device)
169
+
170
+ for epoch in range(num_epochs):
171
+ model.train()
172
+ total_loss = 0
173
+ for step, batch in enumerate(tqdm(train_dataloader)):
174
+ batch = {k: v.to(device) for k, v in batch.items()}
175
+ outputs = model(**batch)
176
+ loss = outputs.loss
177
+ total_loss += loss.detach().float()
178
+ loss.backward()
179
+ optimizer.step()
180
+ lr_scheduler.step()
181
+ optimizer.zero_grad()
182
+
183
+ model.eval()
184
+ eval_loss = 0
185
+ eval_preds = []
186
+ for step, batch in enumerate(tqdm(eval_dataloader)):
187
+ batch = {k: v.to(device) for k, v in batch.items()}
188
+ with torch.no_grad():
189
+ outputs = model(**batch)
190
+ loss = outputs.loss
191
+ eval_loss += loss.detach().float()
192
+ eval_preds.extend(
193
+ tokenizer.batch_decode(torch.argmax(outputs.logits, -1).detach().cpu().numpy(), skip_special_tokens=True)
194
+ )
195
+
196
+ eval_epoch_loss = eval_loss / len(eval_dataloader)
197
+ eval_ppl = torch.exp(eval_epoch_loss)
198
+ train_epoch_loss = total_loss / len(train_dataloader)
199
+ train_ppl = torch.exp(train_epoch_loss)
200
+ print(f"{epoch=}: {train_ppl=} {train_epoch_loss=} {eval_ppl=} {eval_epoch_loss=}")
201
+ ```
202
+
203
+ ## Share your model
204
+
205
+ After training is complete, you can upload your model to the Hub with the [`~transformers.PreTrainedModel.push_to_hub`] method. You'll need to log in to your Hugging Face account first and enter your token when prompted.
206
+
207
+ ```py
208
+ from huggingface_hub import notebook_login
209
+
210
+ account = "<your-hf-account-name>"
211
+ peft_model_id = f"{account}/mt0-large-ia3"
212
+ model.push_to_hub(peft_model_id)
213
+ ```
214
+
215
+ ## Inference
216
+
217
+ To load the model for inference, use the [`~AutoPeftModelForSeq2SeqLM.from_pretrained`] method. Let's also load a sentence of financial news from the dataset to generate a sentiment for.
218
+
219
+ ```py
220
+ from peft import AutoPeftModelForSeq2SeqLM
221
+
222
+ model = AutoPeftModelForSeq2SeqLM.from_pretrained("<your-hf-account-name>/mt0-large-ia3").to("cuda")
223
+ tokenizer = AutoTokenizer.from_pretrained("bigscience/mt0-large")
224
+
225
+ i = 15
226
+ inputs = tokenizer(ds["validation"][text_column][i], return_tensors="pt")
227
+ print(ds["validation"][text_column][i])
228
+ "The robust growth was the result of the inclusion of clothing chain Lindex in the Group in December 2007 ."
229
+ ```
230
+
231
+ Call the [`~transformers.GenerationMixin.generate`] method to generate the predicted sentiment label.
232
+
233
+ ```py
234
+ with torch.no_grad():
235
+ inputs = {k: v.to(device) for k, v in inputs.items()}
236
+ outputs = model.generate(input_ids=inputs["input_ids"], max_new_tokens=10)
237
+ print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True))
238
+ ['positive']
239
+ ```