omarsol committed on
Commit
2e6d63c
1 Parent(s): 9f57253

89b855676556ab30d218ae65654c9ca26a2398e9db20b5e870ea8e92818bf4e9

chroma-db-peft/a29fd59d-e7d4-4aea-b025-299831602c96/data_level0.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5b8d4b3825a7c7a773e22fa3eeef0e7d15a695f5c4183aeff5beb07741a68679
3
+ size 12428000
chroma-db-peft/a29fd59d-e7d4-4aea-b025-299831602c96/header.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e8a3ec48846fc6fdfaef19f5ed2508f0bf3da4a3c93b0f6b3dd21f0a22ec1026
3
+ size 100
chroma-db-peft/a29fd59d-e7d4-4aea-b025-299831602c96/length.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:de70266e9ddc6f6bfa65d0853575f16adc9f17a2188847c9f196291022e1ab22
3
+ size 4000
chroma-db-peft/a29fd59d-e7d4-4aea-b025-299831602c96/link_lists.bin ADDED
File without changes
chroma-db-peft/chroma.sqlite3 CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:0b0321f854c294da9564e7e90ccb11b3190bd3d900d4606250fb1ccbaabd83be
3
- size 5226496
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9a19d72fad22bd94dac70cdbe849c6b9080e4fc8be7dbecccb5cdf7e13e3e942
3
+ size 5292032
chroma-db-peft/document_dict_peft.pkl CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:69ea3f661fbc9d85496d6cf77a09cb545998b1f0ebe4a8fb91865444dbfcffae
3
- size 260392
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:abd6135e2bdc35db2d3349f656fc04b7d523201499275e0baf291f0fa4e42094
3
+ size 261248
peft_md_files/accelerate/deepspeed.md ADDED
@@ -0,0 +1,447 @@
1
+ <!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
2
+ rendered properly in your Markdown viewer.
3
+ -->
4
+
5
+ # DeepSpeed
6
+
7
+ [DeepSpeed](https://www.deepspeed.ai/) is a library designed for speed and scale for distributed training of large models with billions of parameters. At its core is the Zero Redundancy Optimizer (ZeRO) that shards optimizer states (ZeRO-1), gradients (ZeRO-2), and parameters (ZeRO-3) across data parallel processes. This drastically reduces memory usage, allowing you to scale your training to billion parameter models. To unlock even more memory efficiency, ZeRO-Offload reduces GPU compute and memory by leveraging CPU resources during optimization.
8
+
9
+ Both of these features are supported in 🤗 Accelerate, and you can use them with 🤗 PEFT.
10
+
11
+ ## Compatibility with `bitsandbytes` quantization + LoRA
12
+
13
+ Below is a table that summarizes the compatibility between PEFT's LoRA, the [`bitsandbytes`](https://github.com/TimDettmers/bitsandbytes) library, and DeepSpeed ZeRO stages with respect to fine-tuning. DeepSpeed ZeRO-1 and ZeRO-2 will have no effect at inference, as stage 1 shards only the optimizer states and stage 2 shards the optimizer states and gradients:
14
+
15
+ | DeepSpeed stage | Is compatible? |
16
+ |---|---|
17
+ | Zero-1 | 🟢 |
18
+ | Zero-2 | 🟢 |
19
+ | Zero-3 | 🟢 |
20
+
21
+ For DeepSpeed Stage 3 + QLoRA, please refer to the section [Use PEFT QLoRA and DeepSpeed with ZeRO3 for finetuning large models on multiple GPUs](#use-peft-qlora-and-deepspeed-with-zero3-for-finetuning-large-models-on-multiple-gpus) below.
22
+
23
+ To confirm these observations, we ran the SFT (Supervised Fine-tuning) [official example scripts](https://github.com/huggingface/trl/tree/main/examples) of the [Transformers Reinforcement Learning (TRL) library](https://github.com/huggingface/trl) using QLoRA + PEFT and the accelerate configs available [here](https://github.com/huggingface/trl/tree/main/examples/accelerate_configs). We ran these experiments on 2x NVIDIA T4 GPUs.
24
+
25
+ # Use PEFT and DeepSpeed with ZeRO3 for finetuning large models on multiple devices and multiple nodes
26
+
27
+ This section of the guide will help you learn how to use our DeepSpeed [training script](https://github.com/huggingface/peft/blob/main/examples/sft/train.py) for performing SFT. You'll configure the script to do SFT (supervised fine-tuning) of the Llama 70B model with LoRA and ZeRO-3 on 8x H100 80GB GPUs on a single machine. You can configure it to scale to multiple machines by changing the accelerate config.
28
+
29
+ ## Configuration
30
+
31
+ Start by running the following command to [create a DeepSpeed configuration file](https://huggingface.co/docs/accelerate/quicktour#launching-your-distributed-script) with 🤗 Accelerate. The `--config_file` flag allows you to save the configuration file to a specific location, otherwise it is saved as a `default_config.yaml` file in the 🤗 Accelerate cache.
32
+
33
+ The configuration file is used to set the default options when you launch the training script.
34
+
35
+ ```bash
36
+ accelerate config --config_file deepspeed_config.yaml
37
+ ```
38
+
39
+ You'll be asked a few questions about your setup and to configure the following arguments. In this example, you'll use ZeRO-3, so make sure you pick those options.
40
+
41
+ ```bash
42
+ `zero_stage`: [0] Disabled, [1] optimizer state partitioning, [2] optimizer+gradient state partitioning and [3] optimizer+gradient+parameter partitioning
43
+ `gradient_accumulation_steps`: Number of training steps to accumulate gradients before averaging and applying them. Pass the same value as you would pass via the cmd argument, else you will encounter a mismatch error.
44
+ `gradient_clipping`: Enable gradient clipping with a value. Don't set this, as you will be passing it via cmd arguments.
45
+ `offload_optimizer_device`: [none] Disable optimizer offloading, [cpu] offload optimizer to CPU, [nvme] offload optimizer to NVMe SSD. Only applicable with ZeRO >= Stage-2. Set this to `none` as we don't want to enable offloading.
46
+ `offload_param_device`: [none] Disable parameter offloading, [cpu] offload parameters to CPU, [nvme] offload parameters to NVMe SSD. Only applicable with ZeRO Stage-3. Set this to `none` as we don't want to enable offloading.
47
+ `zero3_init_flag`: Decides whether to enable `deepspeed.zero.Init` for constructing massive models. Only applicable with ZeRO Stage-3. Set this to `True`.
48
+ `zero3_save_16bit_model`: Decides whether to save 16-bit model weights when using ZeRO Stage-3. Set this to `True`.
49
+ `mixed_precision`: `no` for FP32 training, `fp16` for FP16 mixed-precision training and `bf16` for BF16 mixed-precision training. Set this to `bf16`.
50
+ ```
51
+
52
+ Once this is done, the corresponding config should look like the one below, and you can find it in the config folder at [deepspeed_config.yaml](https://github.com/huggingface/peft/blob/main/examples/sft/configs/deepspeed_config.yaml):
53
+
54
+ ```yml
55
+ compute_environment: LOCAL_MACHINE
56
+ debug: false
57
+ deepspeed_config:
58
+ deepspeed_multinode_launcher: standard
59
+ gradient_accumulation_steps: 4
60
+ offload_optimizer_device: none
61
+ offload_param_device: none
62
+ zero3_init_flag: true
63
+ zero3_save_16bit_model: true
64
+ zero_stage: 3
65
+ distributed_type: DEEPSPEED
66
+ downcast_bf16: 'no'
67
+ machine_rank: 0
68
+ main_training_function: main
69
+ mixed_precision: bf16
70
+ num_machines: 1
71
+ num_processes: 8
72
+ rdzv_backend: static
73
+ same_network: true
74
+ tpu_env: []
75
+ tpu_use_cluster: false
76
+ tpu_use_sudo: false
77
+ use_cpu: false
78
+ ```
79
+
80
+ ## Launch command
81
+
82
+ The launch command is available at [run_peft_deepspeed.sh](https://github.com/huggingface/peft/blob/main/examples/sft/run_peft_deepspeed.sh) and it is also shown below:
83
+ ```bash
84
+ accelerate launch --config_file "configs/deepspeed_config.yaml" train.py \
85
+ --seed 100 \
86
+ --model_name_or_path "meta-llama/Llama-2-70b-hf" \
87
+ --dataset_name "smangrul/ultrachat-10k-chatml" \
88
+ --chat_template_format "chatml" \
89
+ --add_special_tokens False \
90
+ --append_concat_token False \
91
+ --splits "train,test" \
92
+ --max_seq_len 2048 \
93
+ --num_train_epochs 1 \
94
+ --logging_steps 5 \
95
+ --log_level "info" \
96
+ --logging_strategy "steps" \
97
+ --evaluation_strategy "epoch" \
98
+ --save_strategy "epoch" \
99
+ --push_to_hub \
100
+ --hub_private_repo True \
101
+ --hub_strategy "every_save" \
102
+ --bf16 True \
103
+ --packing True \
104
+ --learning_rate 1e-4 \
105
+ --lr_scheduler_type "cosine" \
106
+ --weight_decay 1e-4 \
107
+ --warmup_ratio 0.0 \
108
+ --max_grad_norm 1.0 \
109
+ --output_dir "llama-sft-lora-deepspeed" \
110
+ --per_device_train_batch_size 8 \
111
+ --per_device_eval_batch_size 8 \
112
+ --gradient_accumulation_steps 4 \
113
+ --gradient_checkpointing True \
114
+ --use_reentrant False \
115
+ --dataset_text_field "content" \
116
+ --use_flash_attn True \
117
+ --use_peft_lora True \
118
+ --lora_r 8 \
119
+ --lora_alpha 16 \
120
+ --lora_dropout 0.1 \
121
+ --lora_target_modules "all-linear" \
122
+ --use_4bit_quantization False
123
+ ```
124
+
125
+ Notice that we are using LoRA with rank=8, alpha=16 and targeting all linear layers. We are passing the DeepSpeed config file and finetuning the 70B Llama model on a subset of the ultrachat dataset.
126
+
127
+ ## The important parts
128
+
129
+ Let's dive a little deeper into the script so you can see what's going on, and understand how it works.
130
+
131
+ The first thing to know is that the script uses DeepSpeed for distributed training as the DeepSpeed config has been passed. The `SFTTrainer` class handles all the heavy lifting of creating the PEFT model using the peft config that is passed. After that, when you call `trainer.train()`, `SFTTrainer` internally uses 🤗 Accelerate to prepare the model, optimizer and trainer using the DeepSpeed config to create the DeepSpeed engine, which is then trained. The main code snippet is below:
132
+
133
+ ```python
134
+ # trainer
135
+ trainer = SFTTrainer(
136
+ model=model,
137
+ tokenizer=tokenizer,
138
+ args=training_args,
139
+ train_dataset=train_dataset,
140
+ eval_dataset=eval_dataset,
141
+ peft_config=peft_config,
142
+ packing=data_args.packing,
143
+ dataset_kwargs={
144
+ "append_concat_token": data_args.append_concat_token,
145
+ "add_special_tokens": data_args.add_special_tokens,
146
+ },
147
+ dataset_text_field=data_args.dataset_text_field,
148
+ max_seq_length=data_args.max_seq_length,
149
+ )
150
+ trainer.accelerator.print(f"{trainer.model}")
151
+
152
+ # train
153
+ checkpoint = None
154
+ if training_args.resume_from_checkpoint is not None:
155
+ checkpoint = training_args.resume_from_checkpoint
156
+ trainer.train(resume_from_checkpoint=checkpoint)
157
+
158
+ # saving final model
159
+ trainer.save_model()
160
+ ```
161
+
162
+ ## Memory usage
163
+
164
+ In the above example, the memory consumed per GPU is 64 GB (80%) as seen in the screenshot below:
165
+
166
+ <div class="flex justify-center">
167
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/peft_deepspeed_mem_usage.png"/>
168
+ </div>
169
+ <small>GPU memory usage for the training run</small>
170
+
171
+ ## More resources
172
+ You can also refer to this blog post [Falcon 180B Finetuning using 🤗 PEFT and DeepSpeed](https://medium.com/@sourabmangrulkar/falcon-180b-finetuning-using-peft-and-deepspeed-b92643091d99) on how to finetune the 180B Falcon model on 16 A100 GPUs across 2 machines.
173
+
174
+
175
+ # Use PEFT QLoRA and DeepSpeed with ZeRO3 for finetuning large models on multiple GPUs
176
+
177
+ In this section, we will look at how to use QLoRA and DeepSpeed Stage-3 for finetuning the 70B Llama model on 2x 40GB GPUs.
178
+ For this, we first need `bitsandbytes>=0.43.0`, `accelerate>=0.28.0`, `transformers>4.38.2`, `trl>0.7.11` and `peft>0.9.0`. We need to set `zero3_init_flag` to true when using the Accelerate config. Below is the config, which can be found at [deepspeed_config_z3_qlora.yaml](https://github.com/huggingface/peft/blob/main/examples/sft/configs/deepspeed_config_z3_qlora.yaml):
179
+
180
+ ```yml
181
+ compute_environment: LOCAL_MACHINE
182
+ debug: false
183
+ deepspeed_config:
184
+ deepspeed_multinode_launcher: standard
185
+ offload_optimizer_device: none
186
+ offload_param_device: none
187
+ zero3_init_flag: true
188
+ zero3_save_16bit_model: true
189
+ zero_stage: 3
190
+ distributed_type: DEEPSPEED
191
+ downcast_bf16: 'no'
192
+ machine_rank: 0
193
+ main_training_function: main
194
+ mixed_precision: bf16
195
+ num_machines: 1
196
+ num_processes: 2
197
+ rdzv_backend: static
198
+ same_network: true
199
+ tpu_env: []
200
+ tpu_use_cluster: false
201
+ tpu_use_sudo: false
202
+ use_cpu: false
203
+ ```
204
+
205
+ The launch command is given below and is also available at [run_peft_qlora_deepspeed_stage3.sh](https://github.com/huggingface/peft/blob/main/examples/sft/run_peft_qlora_deepspeed_stage3.sh):
206
+ ```bash
207
+ accelerate launch --config_file "configs/deepspeed_config_z3_qlora.yaml" train.py \
208
+ --seed 100 \
209
+ --model_name_or_path "meta-llama/Llama-2-70b-hf" \
210
+ --dataset_name "smangrul/ultrachat-10k-chatml" \
211
+ --chat_template_format "chatml" \
212
+ --add_special_tokens False \
213
+ --append_concat_token False \
214
+ --splits "train,test" \
215
+ --max_seq_len 2048 \
216
+ --num_train_epochs 1 \
217
+ --logging_steps 5 \
218
+ --log_level "info" \
219
+ --logging_strategy "steps" \
220
+ --evaluation_strategy "epoch" \
221
+ --save_strategy "epoch" \
222
+ --push_to_hub \
223
+ --hub_private_repo True \
224
+ --hub_strategy "every_save" \
225
+ --bf16 True \
226
+ --packing True \
227
+ --learning_rate 1e-4 \
228
+ --lr_scheduler_type "cosine" \
229
+ --weight_decay 1e-4 \
230
+ --warmup_ratio 0.0 \
231
+ --max_grad_norm 1.0 \
232
+ --output_dir "llama-sft-qlora-dsz3" \
233
+ --per_device_train_batch_size 2 \
234
+ --per_device_eval_batch_size 2 \
235
+ --gradient_accumulation_steps 2 \
236
+ --gradient_checkpointing True \
237
+ --use_reentrant True \
238
+ --dataset_text_field "content" \
239
+ --use_flash_attn True \
240
+ --use_peft_lora True \
241
+ --lora_r 8 \
242
+ --lora_alpha 16 \
243
+ --lora_dropout 0.1 \
244
+ --lora_target_modules "all-linear" \
245
+ --use_4bit_quantization True \
246
+ --use_nested_quant True \
247
+ --bnb_4bit_compute_dtype "bfloat16" \
248
+ --bnb_4bit_quant_storage_dtype "bfloat16"
249
+ ```
250
+
251
+ Notice the new argument being passed, `bnb_4bit_quant_storage_dtype`, which denotes the data type for packing the 4-bit parameters. For example, when it is set to `bfloat16`, **16/4 = 4** 4-bit params are packed together post quantization.
252
+
253
+ In terms of training code, the important code changes are:
254
+
255
+ ```diff
256
+ ...
257
+
258
+ bnb_config = BitsAndBytesConfig(
259
+ load_in_4bit=args.use_4bit_quantization,
260
+ bnb_4bit_quant_type=args.bnb_4bit_quant_type,
261
+ bnb_4bit_compute_dtype=compute_dtype,
262
+ bnb_4bit_use_double_quant=args.use_nested_quant,
263
+ + bnb_4bit_quant_storage=quant_storage_dtype,
264
+ )
265
+
266
+ ...
267
+
268
+ model = AutoModelForCausalLM.from_pretrained(
269
+ args.model_name_or_path,
270
+ quantization_config=bnb_config,
271
+ trust_remote_code=True,
272
+ attn_implementation="flash_attention_2" if args.use_flash_attn else "eager",
273
+ + torch_dtype=quant_storage_dtype or torch.float32,
274
+ )
275
+ ```
276
+
277
+ Notice that `torch_dtype` for `AutoModelForCausalLM` is the same as the `bnb_4bit_quant_storage` data type. That's it. Everything else is handled by Trainer and TRL.
278
+
279
+ ## Memory usage
280
+
281
+ In the above example, the memory consumed per GPU is **36.6 GB**. Therefore, what took 8x 80GB GPUs with DeepSpeed Stage 3 + LoRA, and a couple of 80GB GPUs with DDP + QLoRA, now requires 2x 40GB GPUs. This makes finetuning of large models more accessible.
282
+
283
+ # Use PEFT and DeepSpeed with ZeRO3 and CPU Offloading for finetuning large models on a single GPU
284
+ This section of the guide will help you learn how to use our DeepSpeed [training script](https://github.com/huggingface/peft/blob/main/examples/conditional_generation/peft_lora_seq2seq_accelerate_ds_zero3_offload.py). You'll configure the script to train a large model for conditional generation with ZeRO-3 and CPU Offload.
285
+
286
+ <Tip>
287
+
288
+ 💡 To help you get started, check out our example training scripts for [causal language modeling](https://github.com/huggingface/peft/blob/main/examples/causal_language_modeling/peft_lora_clm_accelerate_ds_zero3_offload.py) and [conditional generation](https://github.com/huggingface/peft/blob/main/examples/conditional_generation/peft_lora_seq2seq_accelerate_ds_zero3_offload.py). You can adapt these scripts for your own applications or even use them out of the box if your task is similar to the one in the scripts.
289
+
290
+ </Tip>
291
+
292
+ ## Configuration
293
+
294
+ Start by running the following command to [create a DeepSpeed configuration file](https://huggingface.co/docs/accelerate/quicktour#launching-your-distributed-script) with 🤗 Accelerate. The `--config_file` flag allows you to save the configuration file to a specific location, otherwise it is saved as a `default_config.yaml` file in the 🤗 Accelerate cache.
295
+
296
+ The configuration file is used to set the default options when you launch the training script.
297
+
298
+ ```bash
299
+ accelerate config --config_file ds_zero3_cpu.yaml
300
+ ```
301
+
302
+ You'll be asked a few questions about your setup and to configure the following arguments. In this example, you'll use ZeRO-3 along with CPU Offload, so make sure you pick those options.
303
+
304
+ ```bash
305
+ `zero_stage`: [0] Disabled, [1] optimizer state partitioning, [2] optimizer+gradient state partitioning and [3] optimizer+gradient+parameter partitioning
306
+ `gradient_accumulation_steps`: Number of training steps to accumulate gradients before averaging and applying them.
307
+ `gradient_clipping`: Enable gradient clipping with value.
308
+ `offload_optimizer_device`: [none] Disable optimizer offloading, [cpu] offload optimizer to CPU, [nvme] offload optimizer to NVMe SSD. Only applicable with ZeRO >= Stage-2.
309
+ `offload_param_device`: [none] Disable parameter offloading, [cpu] offload parameters to CPU, [nvme] offload parameters to NVMe SSD. Only applicable with ZeRO Stage-3.
310
+ `zero3_init_flag`: Decides whether to enable `deepspeed.zero.Init` for constructing massive models. Only applicable with ZeRO Stage-3.
311
+ `zero3_save_16bit_model`: Decides whether to save 16-bit model weights when using ZeRO Stage-3.
312
+ `mixed_precision`: `no` for FP32 training, `fp16` for FP16 mixed-precision training and `bf16` for BF16 mixed-precision training.
313
+ ```
314
+
315
+ An example [configuration file](https://github.com/huggingface/peft/blob/main/examples/conditional_generation/accelerate_ds_zero3_cpu_offload_config.yaml) might look like the following. The most important thing to notice is that `zero_stage` is set to `3`, and `offload_optimizer_device` and `offload_param_device` are set to the `cpu`.
316
+
317
+ ```yml
318
+ compute_environment: LOCAL_MACHINE
319
+ deepspeed_config:
320
+ gradient_accumulation_steps: 1
321
+ gradient_clipping: 1.0
322
+ offload_optimizer_device: cpu
323
+ offload_param_device: cpu
324
+ zero3_init_flag: true
325
+ zero3_save_16bit_model: true
326
+ zero_stage: 3
327
+ distributed_type: DEEPSPEED
328
+ downcast_bf16: 'no'
329
+ dynamo_backend: 'NO'
330
+ fsdp_config: {}
331
+ machine_rank: 0
332
+ main_training_function: main
333
+ megatron_lm_config: {}
334
+ mixed_precision: 'no'
335
+ num_machines: 1
336
+ num_processes: 1
337
+ rdzv_backend: static
338
+ same_network: true
339
+ use_cpu: false
340
+ ```
341
+
342
+ ## The important parts
343
+
344
+ Let's dive a little deeper into the script so you can see what's going on, and understand how it works.
345
+
346
+ Within the [`main`](https://github.com/huggingface/peft/blob/2822398fbe896f25d4dac5e468624dc5fd65a51b/examples/conditional_generation/peft_lora_seq2seq_accelerate_ds_zero3_offload.py#L103) function, the script creates an [`~accelerate.Accelerator`] class to initialize all the necessary requirements for distributed training.
347
+
348
+ <Tip>
349
+
350
+ 💡 Feel free to change the model and dataset inside the `main` function. If your dataset format is different from the one in the script, you may also need to write your own preprocessing function.
351
+
352
+ </Tip>
353
+
354
+ The script also creates a configuration for the 🤗 PEFT method you're using, which in this case, is LoRA. The [`LoraConfig`] specifies the task type and important parameters such as the dimension of the low-rank matrices, the matrices scaling factor, and the dropout probability of the LoRA layers. If you want to use a different 🤗 PEFT method, make sure you replace `LoraConfig` with the appropriate [class](../package_reference/tuners).
355
+
356
+ ```diff
357
+ def main():
358
+ + accelerator = Accelerator()
359
+ model_name_or_path = "facebook/bart-large"
360
+ dataset_name = "twitter_complaints"
361
+ + peft_config = LoraConfig(
362
+ task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, r=8, lora_alpha=32, lora_dropout=0.1
363
+ )
364
+ ```
365
+
366
+ Throughout the script, you'll see the [`~accelerate.Accelerator.main_process_first`] and [`~accelerate.Accelerator.wait_for_everyone`] functions which help control and synchronize when processes are executed.
367
+
368
+ The [`get_peft_model`] function takes a base model and the [`peft_config`] you prepared earlier to create a [`PeftModel`]:
369
+
370
+ ```diff
371
+ model = AutoModelForSeq2SeqLM.from_pretrained(model_name_or_path)
372
+ + model = get_peft_model(model, peft_config)
373
+ ```
374
+
375
+ Pass all the relevant training objects to 🤗 Accelerate's [`~accelerate.Accelerator.prepare`] which makes sure everything is ready for training:
376
+
377
+ ```py
378
+ model, train_dataloader, eval_dataloader, test_dataloader, optimizer, lr_scheduler = accelerator.prepare(
379
+ model, train_dataloader, eval_dataloader, test_dataloader, optimizer, lr_scheduler
380
+ )
381
+ ```
382
+
383
+ The next bit of code checks whether the DeepSpeed plugin is used in the `Accelerator`, and if the plugin exists, whether we are using ZeRO-3. This flag is used when calling the `generate` function during inference to sync GPUs when the model parameters are sharded; a short sketch of that usage follows the snippet below:
384
+
385
+ ```py
386
+ is_ds_zero_3 = False
387
+ if getattr(accelerator.state, "deepspeed_plugin", None):
388
+ is_ds_zero_3 = accelerator.state.deepspeed_plugin.zero_stage == 3
389
+ ```
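+
+ Here is that minimal sketch (not verbatim from the script; names such as `batch` and `max_new_tokens` are only placeholders):
+
+ ```py
+ # Under ZeRO-3 the parameters are sharded, so every rank must enter `generate`
+ # together for the collective gather to succeed; `synced_gpus` keeps them in lockstep.
+ with torch.no_grad():
+     outputs = accelerator.unwrap_model(model).generate(
+         **batch,
+         synced_gpus=is_ds_zero_3,
+         max_new_tokens=10,
+     )
+ ```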
390
+
391
+ Inside the training loop, the usual `loss.backward()` is replaced by 🤗 Accelerate's [`~accelerate.Accelerator.backward`] which uses the correct `backward()` method based on your configuration:
392
+
393
+ ```diff
394
+ for epoch in range(num_epochs):
395
+ with TorchTracemalloc() as tracemalloc:
396
+ model.train()
397
+ total_loss = 0
398
+ for step, batch in enumerate(tqdm(train_dataloader)):
399
+ outputs = model(**batch)
400
+ loss = outputs.loss
401
+ total_loss += loss.detach().float()
402
+ + accelerator.backward(loss)
403
+ optimizer.step()
404
+ lr_scheduler.step()
405
+ optimizer.zero_grad()
406
+ ```
407
+
408
+ That is all! The rest of the script handles the training loop, evaluation, and even pushes it to the Hub for you.
409
+
410
+ ## Train
411
+
412
+ Run the following command to launch the training script. Earlier, you saved the configuration file to `ds_zero3_cpu.yaml`, so you'll need to pass the path to the launcher with the `--config_file` argument like this:
413
+
414
+ ```bash
415
+ accelerate launch --config_file ds_zero3_cpu.yaml examples/peft_lora_seq2seq_accelerate_ds_zero3_offload.py
416
+ ```
417
+
418
+ You'll see some output logs that track memory usage during training, and once it's completed, the script returns the accuracy and compares the predictions to the labels:
419
+
420
+ ```bash
421
+ GPU Memory before entering the train : 1916
422
+ GPU Memory consumed at the end of the train (end-begin): 66
423
+ GPU Peak Memory consumed during the train (max-begin): 7488
424
+ GPU Total Peak Memory consumed during the train (max): 9404
425
+ CPU Memory before entering the train : 19411
426
+ CPU Memory consumed at the end of the train (end-begin): 0
427
+ CPU Peak Memory consumed during the train (max-begin): 0
428
+ CPU Total Peak Memory consumed during the train (max): 19411
429
+ epoch=4: train_ppl=tensor(1.0705, device='cuda:0') train_epoch_loss=tensor(0.0681, device='cuda:0')
430
+ 100%|████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:27<00:00, 3.92s/it]
431
+ GPU Memory before entering the eval : 1982
432
+ GPU Memory consumed at the end of the eval (end-begin): -66
433
+ GPU Peak Memory consumed during the eval (max-begin): 672
434
+ GPU Total Peak Memory consumed during the eval (max): 2654
435
+ CPU Memory before entering the eval : 19411
436
+ CPU Memory consumed at the end of the eval (end-begin): 0
437
+ CPU Peak Memory consumed during the eval (max-begin): 0
438
+ CPU Total Peak Memory consumed during the eval (max): 19411
439
+ accuracy=100.0
440
+ eval_preds[:10]=['no complaint', 'no complaint', 'complaint', 'complaint', 'no complaint', 'no complaint', 'no complaint', 'complaint', 'complaint', 'no complaint']
441
+ dataset['train'][label_column][:10]=['no complaint', 'no complaint', 'complaint', 'complaint', 'no complaint', 'no complaint', 'no complaint', 'complaint', 'complaint', 'no complaint']
442
+ ```
443
+
444
+ # Caveats
445
+ 1. Merging when using PEFT and DeepSpeed is currently unsupported and will raise an error.
446
+ 2. When using CPU offloading, the major gains from using PEFT to shrink the optimizer states and gradients to those of the adapter weights are realized in CPU RAM; there won't be savings with respect to GPU memory.
447
+ 3. DeepSpeed Stage 3 and QLoRA, when used with CPU offloading, lead to more GPU memory usage compared to disabling CPU offloading.
peft_md_files/accelerate/fsdp.md ADDED
@@ -0,0 +1,292 @@
1
+ <!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
2
+ rendered properly in your Markdown viewer.
3
+ -->
4
+
5
+ # Fully Sharded Data Parallel
6
+
7
+ [Fully sharded data parallel](https://pytorch.org/docs/stable/fsdp.html) (FSDP) is developed for distributed training of large pretrained models up to 1T parameters. FSDP achieves this by sharding the model parameters, gradients, and optimizer states across data parallel processes and it can also offload sharded model parameters to a CPU. The memory efficiency afforded by FSDP allows you to scale training to larger batch or model sizes.
8
+
9
+ Both of these features are supported in 🤗 Accelerate, and you can use them with 🤗 PEFT.
10
+
11
+ # Use PEFT and FSDP
12
+ This section of the guide will help you learn how to use our FSDP [training script](https://github.com/huggingface/peft/blob/main/examples/sft/train.py) for performing SFT. You'll configure the script to do SFT (supervised fine-tuning) of the Llama 70B model with LoRA and FSDP on 8x H100 80GB GPUs on a single machine. You can configure it to scale to multiple machines by changing the accelerate config.
13
+
14
+ ## Configuration
15
+
16
+ Start by running the following command to [create a FSDP configuration file](https://huggingface.co/docs/accelerate/quicktour#launching-your-distributed-script) with 🤗 Accelerate. The `--config_file` flag allows you to save the configuration file to a specific location, otherwise it is saved as a `default_config.yaml` file in the 🤗 Accelerate cache.
17
+
18
+ The configuration file is used to set the default options when you launch the training script.
19
+
20
+ ```bash
21
+ accelerate config --config_file fsdp_config.yaml
22
+ ```
23
+
24
+ You'll be asked a few questions about your setup and to configure the following arguments. In this example, you'll answer the questionnaire as shown in the image below.
25
+ <div class="flex justify-center">
26
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/fsdp-peft-config.png"/>
27
+ </div>
28
+ <small>Creating Accelerate's config to use FSDP</small>
29
+
30
+ Once this is done, the corresponding config should look like the one below, and you can find it in the config folder at [fsdp_config.yaml](https://github.com/huggingface/peft/blob/main/examples/sft/configs/fsdp_config.yaml):
31
+
32
+ ```yml
33
+ compute_environment: LOCAL_MACHINE
34
+ debug: false
35
+ distributed_type: FSDP
36
+ downcast_bf16: 'no'
37
+ fsdp_config:
38
+ fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
39
+ fsdp_backward_prefetch: BACKWARD_PRE
40
+ fsdp_cpu_ram_efficient_loading: true
41
+ fsdp_forward_prefetch: false
42
+ fsdp_offload_params: false
43
+ fsdp_sharding_strategy: FULL_SHARD
44
+ fsdp_state_dict_type: SHARDED_STATE_DICT
45
+ fsdp_sync_module_states: true
46
+ fsdp_use_orig_params: false
47
+ machine_rank: 0
48
+ main_training_function: main
49
+ mixed_precision: bf16
50
+ num_machines: 1
51
+ num_processes: 8
52
+ rdzv_backend: static
53
+ same_network: true
54
+ tpu_env: []
55
+ tpu_use_cluster: false
56
+ tpu_use_sudo: false
57
+ use_cpu: false
58
+ ```
59
+
60
+ ## Launch command
61
+
62
+ The launch command is available at [run_peft_fsdp.sh](https://github.com/huggingface/peft/blob/main/examples/sft/run_peft_fsdp.sh) and it is also shown below:
63
+ ```bash
64
+ accelerate launch --config_file "configs/fsdp_config.yaml" train.py \
65
+ --seed 100 \
66
+ --model_name_or_path "meta-llama/Llama-2-70b-hf" \
67
+ --dataset_name "smangrul/ultrachat-10k-chatml" \
68
+ --chat_template_format "chatml" \
69
+ --add_special_tokens False \
70
+ --append_concat_token False \
71
+ --splits "train,test" \
72
+ --max_seq_len 2048 \
73
+ --num_train_epochs 1 \
74
+ --logging_steps 5 \
75
+ --log_level "info" \
76
+ --logging_strategy "steps" \
77
+ --evaluation_strategy "epoch" \
78
+ --save_strategy "epoch" \
79
+ --push_to_hub \
80
+ --hub_private_repo True \
81
+ --hub_strategy "every_save" \
82
+ --bf16 True \
83
+ --packing True \
84
+ --learning_rate 1e-4 \
85
+ --lr_scheduler_type "cosine" \
86
+ --weight_decay 1e-4 \
87
+ --warmup_ratio 0.0 \
88
+ --max_grad_norm 1.0 \
89
+ --output_dir "llama-sft-lora-fsdp" \
90
+ --per_device_train_batch_size 8 \
91
+ --per_device_eval_batch_size 8 \
92
+ --gradient_accumulation_steps 4 \
93
+ --gradient_checkpointing True \
94
+ --use_reentrant False \
95
+ --dataset_text_field "content" \
96
+ --use_flash_attn True \
97
+ --use_peft_lora True \
98
+ --lora_r 8 \
99
+ --lora_alpha 16 \
100
+ --lora_dropout 0.1 \
101
+ --lora_target_modules "all-linear" \
102
+ --use_4bit_quantization False
103
+ ```
104
+
105
+ Notice that we are using LoRA with rank=8, alpha=16 and targeting all linear layers. We are passing the FSDP config file and finetuning the 70B Llama model on a subset of the [ultrachat dataset](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k).
106
+
107
+ ## The important parts
108
+
109
+ Let's dive a little deeper into the script so you can see what's going on, and understand how it works.
110
+
111
+ The first thing to know is that the script uses FSDP for distributed training as the FSDP config has been passed. The `SFTTrainer` class handles all the heavy lifting of creating the PEFT model using the peft config that is passed. After that, when you call `trainer.train()`, the Trainer internally uses 🤗 Accelerate to prepare the model, optimizer and trainer using the FSDP config to create the FSDP-wrapped model, which is then trained. The main code snippet is below:
112
+
113
+ ```python
114
+ # trainer
115
+ trainer = SFTTrainer(
116
+ model=model,
117
+ tokenizer=tokenizer,
118
+ args=training_args,
119
+ train_dataset=train_dataset,
120
+ eval_dataset=eval_dataset,
121
+ peft_config=peft_config,
122
+ packing=data_args.packing,
123
+ dataset_kwargs={
124
+ "append_concat_token": data_args.append_concat_token,
125
+ "add_special_tokens": data_args.add_special_tokens,
126
+ },
127
+ dataset_text_field=data_args.dataset_text_field,
128
+ max_seq_length=data_args.max_seq_length,
129
+ )
130
+ trainer.accelerator.print(f"{trainer.model}")
131
+ if model_args.use_peft_lora:
132
+ # handle PEFT+FSDP case
133
+ trainer.model.print_trainable_parameters()
134
+ if getattr(trainer.accelerator.state, "fsdp_plugin", None):
135
+ from peft.utils.other import fsdp_auto_wrap_policy
136
+
137
+ fsdp_plugin = trainer.accelerator.state.fsdp_plugin
138
+ fsdp_plugin.auto_wrap_policy = fsdp_auto_wrap_policy(trainer.model)
139
+
140
+ # train
141
+ checkpoint = None
142
+ if training_args.resume_from_checkpoint is not None:
143
+ checkpoint = training_args.resume_from_checkpoint
144
+ trainer.train(resume_from_checkpoint=checkpoint)
145
+
146
+ # saving final model
147
+ if trainer.is_fsdp_enabled:
148
+ trainer.accelerator.state.fsdp_plugin.set_state_dict_type("FULL_STATE_DICT")
149
+ trainer.save_model()
150
+ ```
151
+
152
+
153
+ Here, one main thing to note currently when using FSDP with PEFT is that `use_orig_params` needs to be `False` to realize GPU memory savings. Due to `use_orig_params=False`, the auto wrap policy for FSDP needs to change so that trainable and non-trainable parameters are wrapped separately. This is done by the code snippet below, which uses the util function `fsdp_auto_wrap_policy` from PEFT:
154
+
155
+ ```python
156
+ if getattr(trainer.accelerator.state, "fsdp_plugin", None):
157
+ from peft.utils.other import fsdp_auto_wrap_policy
158
+
159
+ fsdp_plugin = trainer.accelerator.state.fsdp_plugin
160
+ fsdp_plugin.auto_wrap_policy = fsdp_auto_wrap_policy(trainer.model)
161
+ ```
162
+
163
+ ## Memory usage
164
+
165
+ In the above example, the memory consumed per GPU is 72-80 GB (90-98%) as seen in the screenshot below. The slight increase in GPU memory at the end is from saving the model using the `FULL_STATE_DICT` state dict type instead of `SHARDED_STATE_DICT`, so that the model has adapter weights that can be loaded normally with the `from_pretrained` method during inference:
166
+
167
+ <div class="flex justify-center">
168
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/peft_fsdp_mem_usage.png"/>
169
+ </div>
170
+ <small>GPU memory usage for the training run</small>
171
+
172
+ # Use PEFT QLoRA and FSDP for finetuning large models on multiple GPUs
173
+
174
+ In this section, we will look at how to use QLoRA and FSDP for finetuning the 70B Llama model on 2x 24GB GPUs. [Answer.AI](https://www.answer.ai/), in collaboration with bitsandbytes and Hugging Face 🤗, open sourced code enabling the usage of FSDP+QLoRA and explained the whole process in their insightful blogpost [You can now train a 70b language model at home](https://www.answer.ai/posts/2024-03-06-fsdp-qlora.html). This is now integrated into the Hugging Face ecosystem.
175
+
176
+ For this, we first need `bitsandbytes>=0.43.0`, `accelerate>=0.28.0`, `transformers>4.38.2`, `trl>0.7.11` and `peft>0.9.0`. We need to set `fsdp_cpu_ram_efficient_loading=true`, `fsdp_use_orig_params=false` and `fsdp_offload_params=true` (CPU offloading) when using the Accelerate config. When not using the accelerate launcher, you can alternatively set the environment variable `export FSDP_CPU_RAM_EFFICIENT_LOADING=true`. Here, we will be using the accelerate config, and below is the config which can be found at [fsdp_config_qlora.yaml](https://github.com/huggingface/peft/blob/main/examples/sft/configs/fsdp_config_qlora.yaml):
177
+
178
+ ```yml
179
+ compute_environment: LOCAL_MACHINE
180
+ debug: false
181
+ distributed_type: FSDP
182
+ downcast_bf16: 'no'
183
+ fsdp_config:
184
+ fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
185
+ fsdp_backward_prefetch: BACKWARD_PRE
186
+ fsdp_cpu_ram_efficient_loading: true
187
+ fsdp_forward_prefetch: false
188
+ fsdp_offload_params: true
189
+ fsdp_sharding_strategy: FULL_SHARD
190
+ fsdp_state_dict_type: SHARDED_STATE_DICT
191
+ fsdp_sync_module_states: true
192
+ fsdp_use_orig_params: false
193
+ machine_rank: 0
194
+ main_training_function: main
195
+ mixed_precision: 'no'
196
+ num_machines: 1
197
+ num_processes: 2
198
+ rdzv_backend: static
199
+ same_network: true
200
+ tpu_env: []
201
+ tpu_use_cluster: false
202
+ tpu_use_sudo: false
203
+ use_cpu: false
204
+ ```
205
+
206
+ The launch command is given below and is also available at [run_peft_qlora_fsdp.sh](https://github.com/huggingface/peft/blob/main/examples/sft/run_peft_qlora_fsdp.sh):
207
+ ```bash
208
+ accelerate launch --config_file "configs/fsdp_config_qlora.yaml" train.py \
209
+ --seed 100 \
210
+ --model_name_or_path "meta-llama/Llama-2-70b-hf" \
211
+ --dataset_name "smangrul/ultrachat-10k-chatml" \
212
+ --chat_template_format "chatml" \
213
+ --add_special_tokens False \
214
+ --append_concat_token False \
215
+ --splits "train,test" \
216
+ --max_seq_len 2048 \
217
+ --num_train_epochs 1 \
218
+ --logging_steps 5 \
219
+ --log_level "info" \
220
+ --logging_strategy "steps" \
221
+ --evaluation_strategy "epoch" \
222
+ --save_strategy "epoch" \
223
+ --push_to_hub \
224
+ --hub_private_repo True \
225
+ --hub_strategy "every_save" \
226
+ --bf16 True \
227
+ --packing True \
228
+ --learning_rate 1e-4 \
229
+ --lr_scheduler_type "cosine" \
230
+ --weight_decay 1e-4 \
231
+ --warmup_ratio 0.0 \
232
+ --max_grad_norm 1.0 \
233
+ --output_dir "llama-sft-qlora-fsdp" \
234
+ --per_device_train_batch_size 2 \
235
+ --per_device_eval_batch_size 2 \
236
+ --gradient_accumulation_steps 2 \
237
+ --gradient_checkpointing True \
238
+ --use_reentrant True \
239
+ --dataset_text_field "content" \
240
+ --use_flash_attn True \
241
+ --use_peft_lora True \
242
+ --lora_r 8 \
243
+ --lora_alpha 16 \
244
+ --lora_dropout 0.1 \
245
+ --lora_target_modules "all-linear" \
246
+ --use_4bit_quantization True \
247
+ --use_nested_quant True \
248
+ --bnb_4bit_compute_dtype "bfloat16" \
249
+ --bnb_4bit_quant_storage_dtype "bfloat16"
250
+ ```
251
+
252
+ Notice the new argument being passed, `bnb_4bit_quant_storage_dtype`, which denotes the data type for packing the 4-bit parameters. For example, when it is set to `bfloat16`, **16/4 = 4** 4-bit params are packed together post quantization. When using mixed precision training with `bfloat16`, `bnb_4bit_quant_storage_dtype` can be either `bfloat16` for pure `bfloat16` finetuning, or `float32` for automatic mixed precision (this consumes more GPU memory). When using mixed precision training with `float16`, `bnb_4bit_quant_storage_dtype` should be set to `float32` for stable automatic mixed precision training.
253
+
254
+ In terms of training code, the important code changes are:
255
+
256
+ ```diff
257
+ ...
258
+
259
+ bnb_config = BitsAndBytesConfig(
260
+ load_in_4bit=args.use_4bit_quantization,
261
+ bnb_4bit_quant_type=args.bnb_4bit_quant_type,
262
+ bnb_4bit_compute_dtype=compute_dtype,
263
+ bnb_4bit_use_double_quant=args.use_nested_quant,
264
+ + bnb_4bit_quant_storage=quant_storage_dtype,
265
+ )
266
+
267
+ ...
268
+
269
+ model = AutoModelForCausalLM.from_pretrained(
270
+ args.model_name_or_path,
271
+ quantization_config=bnb_config,
272
+ trust_remote_code=True,
273
+ attn_implementation="flash_attention_2" if args.use_flash_attn else "eager",
274
+ + torch_dtype=quant_storage_dtype or torch.float32,
275
+ )
276
+ ```
277
+
278
+ Notice that `torch_dtype` for `AutoModelForCausalLM` is the same as the `bnb_4bit_quant_storage` data type. That's it. Everything else is handled by Trainer and TRL.
279
+
280
+ ## Memory usage
281
+
282
+ In the above example, the memory consumed per GPU is **19.6 GB** while CPU RAM usage is around **107 GB**. When disabling CPU offloading, the GPU memory usage is **35.6 GB/GPU**. Therefore, what took 16x 80GB GPUs for full finetuning, 8x 80GB GPUs with FSDP+LoRA, and a couple of 80GB GPUs with DDP+QLoRA, now requires 2x 24GB GPUs. This makes finetuning of large models more accessible.
283
+
284
+ ## More resources
285
+ You can also refer to the [llama-recipes](https://github.com/facebookresearch/llama-recipes/?tab=readme-ov-file#fine-tuning) repo and the [Getting started with Llama](https://llama.meta.com/get-started/#fine-tuning) guide for how to finetune using FSDP and PEFT.
286
+
287
+ ## Caveats
288
+ 1. Merging when using PEFT and FSDP is currently unsupported and will raise an error.
289
+ 2. Passing the `modules_to_save` config parameter is untested at present.
290
+ 3. GPU Memory saving when using CPU Offloading is untested at present.
291
+ 4. When using FSDP+QLoRA, `paged_adamw_8bit` currently results in an error when saving a checkpoint.
292
+ 5. DoRA training with FSDP should work (albeit at lower speed than LoRA). If combined with bitsandbytes (QDoRA), 4-bit quantization should also work, but 8-bit quantization has known issues and is not recommended.
peft_md_files/conceptual_guides/adapter.md ADDED
@@ -0,0 +1,95 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # Adapters
18
+
19
+ Adapter-based methods add extra trainable parameters after the attention and fully-connected layers of a frozen pretrained model to reduce memory usage and speed up training. The method varies depending on the adapter; it could simply be an extra added layer, or it could express the weight updates ∆W as a low-rank decomposition of the weight matrix. Either way, the adapters are typically small but demonstrate comparable performance to a fully finetuned model and enable training larger models with fewer resources.
20
+
21
+ This guide will give you a brief overview of the adapter methods supported by PEFT (if you're interested in learning more details about a specific method, take a look at the linked paper).
22
+
23
+ ## Low-Rank Adaptation (LoRA)
24
+
25
+ <Tip>
26
+
27
+ LoRA is one of the most popular PEFT methods and a good starting point if you're just getting started with PEFT. It was originally developed for large language models but it is a tremendously popular training method for diffusion models because of its efficiency and effectiveness.
28
+
29
+ </Tip>
30
+
31
+ As mentioned briefly earlier, [LoRA](https://hf.co/papers/2106.09685) is a technique that accelerates finetuning large models while consuming less memory.
32
+
33
+ LoRA represents the weight updates ∆W with two smaller matrices (called *update matrices*) through low-rank decomposition. These new matrices can be trained to adapt to the new data while keeping the overall number of parameters low. The original weight matrix remains frozen and doesn't receive any further updates. To produce the final results, the original and extra adapted weights are combined. You could also merge the adapter weights with the base model to eliminate inference latency.
34
+
35
+ <div class="flex justify-center">
36
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora_animated.gif"/>
37
+ </div>
38
+
39
+ This approach has a number of advantages:
40
+
41
+ * LoRA makes finetuning more efficient by drastically reducing the number of trainable parameters.
42
+ * The original pretrained weights are kept frozen, which means you can have multiple lightweight and portable LoRA models for various downstream tasks built on top of them.
43
+ * LoRA is orthogonal to other parameter-efficient methods and can be combined with many of them.
44
+ * Performance of models finetuned using LoRA is comparable to the performance of fully finetuned models.
45
+
46
+ In principle, LoRA can be applied to any subset of weight matrices in a neural network to reduce the number of trainable parameters. However, for simplicity and further parameter efficiency, LoRA is typically only applied to the attention blocks in Transformer models. The resulting number of trainable parameters in a LoRA model depends on the size of the update matrices, which is determined mainly by the rank `r` and the shape of the original weight matrix.
47
+
48
+ <div class="flex justify-center">
49
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora.png"/>
50
+ </div>
51
+ <small><a href="https://hf.co/papers/2103.10385">Navigating Text-To-Image Customization: From LyCORIS Fine-Tuning to Model Evaluation</a></small>
52
+
53
+ ## Low-Rank Hadamard Product (LoHa)
54
+
55
+ Low-rank decomposition can impact performance because the weight updates are limited to the low-rank space, which can constrain a model's expressiveness. However, you don't necessarily want to use a larger rank because it increases the number of trainable parameters. To address this, [LoHa](https://huggingface.co/papers/2108.06098) (a method originally developed for computer vision) was applied to diffusion models where the ability to generate diverse images is an important consideration. LoHa should also work with general model types, but the embedding layers aren't currently implemented in PEFT.
56
+
57
+ LoHa uses the [Hadamard product](https://en.wikipedia.org/wiki/Hadamard_product_(matrices)) (element-wise product) instead of the matrix product. ∆W is represented by four smaller matrices instead of two - like in LoRA - and each pair of these low-rank matrices is combined with the Hadamard product. As a result, ∆W can have the same number of trainable parameters but a higher rank and expressivity.
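+
+ As a rough sketch of this parametrization (toy shapes, not PEFT's internal implementation):
+
+ ```py
+ import torch
+
+ d, k, r = 64, 64, 4                      # toy sizes; r is the rank of each factor pair
+ B1, A1 = torch.randn(d, r), torch.randn(r, k)
+ B2, A2 = torch.randn(d, r), torch.randn(r, k)
+
+ # Two low-rank products combined element-wise (Hadamard product): the result can
+ # reach rank up to r * r while training only 2 * r * (d + k) parameters.
+ delta_W = (B1 @ A1) * (B2 @ A2)
+ ```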
58
+
59
+ ## Low-Rank Kronecker Product (LoKr)
60
+
61
+ [LoKr](https://hf.co/papers/2309.14859) is very similar to LoRA and LoHa, and it is also mainly applied to diffusion models, though you could also use it with other model types. LoKr replaces the matrix product with the [Kronecker product](https://en.wikipedia.org/wiki/Kronecker_product) instead. The Kronecker product decomposition creates a block matrix which preserves the rank of the original weight matrix. Another benefit of the Kronecker product is that it can be vectorized by stacking the matrix columns. This can speed up the process because you're avoiding fully reconstructing ∆W.
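+
+ A rough sketch of the idea (toy shapes, illustrative only):
+
+ ```py
+ import torch
+
+ # Two small trainable factors whose Kronecker product forms the full-size update.
+ A = torch.randn(8, 8)
+ B = torch.randn(64, 64)
+
+ delta_W = torch.kron(A, B)   # shape (8*64, 8*64) = (512, 512)
+ print(A.numel() + B.numel(), "trainable params vs", delta_W.numel(), "entries in the full update")
+ ```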
62
+
63
+ ## Orthogonal Finetuning (OFT)
64
+
65
+ <div class="flex justify-center">
66
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/oft.png"/>
67
+ </div>
68
+ <small><a href="https://hf.co/papers/2306.07280">Controlling Text-to-Image Diffusion by Orthogonal Finetuning</a></small>
69
+
70
+ [OFT](https://hf.co/papers/2306.07280) is a method that primarily focuses on preserving a pretrained model's generative performance in the finetuned model. It tries to maintain the same cosine similarity (hyperspherical energy) between all pairwise neurons in a layer because this better captures the semantic information among neurons. This means OFT is more capable at preserving the subject and it is better for controllable generation (similar to [ControlNet](https://huggingface.co/docs/diffusers/using-diffusers/controlnet)).
71
+
72
+ OFT preserves the hyperspherical energy by learning an orthogonal transformation for neurons to keep the cosine similarity between them unchanged. In practice, this means taking the matrix product of an orthogonal matrix with the pretrained weight matrix. However, to be parameter-efficient, the orthogonal matrix is represented as a block-diagonal matrix with rank `r` blocks. Whereas LoRA reduces the number of trainable parameters with low-rank structures, OFT reduces the number of trainable parameters with a sparse block-diagonal matrix structure.
73
+
74
+ ## Orthogonal Butterfly (BOFT)
75
+
76
+ [BOFT](https://hf.co/papers/2311.06243) generalizes OFT with a butterfly factorization of the orthogonal matrix and, like OFT, primarily focuses on preserving a pretrained model's generative performance in the finetuned model. It tries to maintain the same cosine similarity (hyperspherical energy) between all pairwise neurons in a layer because this better captures the semantic information among neurons. This means BOFT is more capable at preserving the subject and it is better for controllable generation (similar to [ControlNet](https://huggingface.co/docs/diffusers/using-diffusers/controlnet)).
77
+
78
+ Like OFT, BOFT preserves the hyperspherical energy by learning an orthogonal transformation for neurons to keep the cosine similarity between them unchanged. However, instead of a single block-diagonal matrix, BOFT factorizes the orthogonal transformation into a product of sparse butterfly matrices, which further improves parameter efficiency and finetuning flexibility compared to OFT.
79
+
80
+ ## Adaptive Low-Rank Adaptation (AdaLoRA)
81
+
82
+ [AdaLoRA](https://hf.co/papers/2303.10512) manages the parameter budget introduced from LoRA by allocating more parameters - in other words, a higher rank `r` - for important weight matrices that are better adapted for a task and pruning less important ones. The rank is controlled by a method similar to singular value decomposition (SVD). The ∆W is parameterized with two orthogonal matrices and a diagonal matrix which contains singular values. This parametrization method avoids iteratively applying SVD which is computationally expensive. Based on this method, the rank of ∆W is adjusted according to an importance score. ∆W is divided into triplets and each triplet is scored according to its contribution to model performance. Triplets with low importance scores are pruned and triplets with high importance scores are kept for finetuning.
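+
+ A rough sketch of the SVD-like parametrization described above (shapes are illustrative):
+
+ ```py
+ import torch
+
+ d, k, r = 64, 64, 8
+ P = torch.randn(d, r)            # "left singular vectors" (kept near-orthogonal via a regularizer)
+ E = torch.randn(r)               # learnable "singular values", one per triplet
+ Q = torch.randn(r, k)            # "right singular vectors"
+
+ # Pruning a triplet amounts to zeroing its entry in E, which lowers the effective rank.
+ delta_W = P @ torch.diag(E) @ Q
+ ```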
83
+
84
+ ## Llama-Adapter
85
+
86
+ [Llama-Adapter](https://hf.co/papers/2303.16199) is a method for adapting Llama into an instruction-following model. To help adapt the model for instruction-following, the adapter is trained with a 52K instruction-output dataset.
87
+
88
+ A set of learnable adaption prompts is prefixed to the input instruction tokens. These are inserted into the upper layers of the model because it is better to learn with the higher-level semantics of the pretrained model. The instruction-output tokens prefixed to the input guide the adaption prompt to generate a contextual response.
89
+
90
+ <div class="flex justify-center">
91
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/llama-adapter.png"/>
92
+ </div>
93
+ <small><a href="https://hf.co/papers/2303.16199">LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention</a></small>
94
+
95
+ To avoid adding noise to the tokens, the adapter uses zero-initialized attention. On top of this, the adapter adds a learnable gating factor (initialized with zeros) to progressively add information to the model during training. This prevents overwhelming the model's pretrained knowledge with the newly learned instructions.
peft_md_files/conceptual_guides/ia3.md ADDED
@@ -0,0 +1,68 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # IA3
18
+
19
+ This conceptual guide gives a brief overview of [IA3](https://arxiv.org/abs/2205.05638), a parameter-efficient fine-tuning technique that is
20
+ intended to improve over [LoRA](./lora).
21
+
22
+ To make fine-tuning more efficient, IA3 (Infused Adapter by Inhibiting and Amplifying Inner Activations)
23
+ rescales inner activations with learned vectors. These learned vectors are injected in the attention and feedforward modules
24
+ in a typical transformer-based architecture. These learned vectors are the only trainable parameters during fine-tuning, and thus the original
25
+ weights remain frozen. Dealing with learned vectors (as opposed to learned low-rank updates to a weight matrix like LoRA)
26
+ keeps the number of trainable parameters much smaller.
27
+
28
+ Being similar to LoRA, IA3 carries many of the same advantages:
29
+
30
+ * IA3 makes fine-tuning more efficient by drastically reducing the number of trainable parameters. (For T0, an IA3 model only has about 0.01% trainable parameters, while even LoRA has > 0.1%)
31
+ * The original pre-trained weights are kept frozen, which means you can have multiple lightweight and portable IA3 models for various downstream tasks built on top of them.
32
+ * Performance of models fine-tuned using IA3 is comparable to the performance of fully fine-tuned models.
33
+ * IA3 does not add any inference latency because adapter weights can be merged with the base model.
34
+
35
+ In principle, IA3 can be applied to any subset of weight matrices in a neural network to reduce the number of trainable
36
+ parameters. Following the authors' implementation, IA3 weights are added to the key, value and feedforward layers
37
+ of a Transformer model. To be specific, for transformer models, IA3 weights are added to the outputs of key and value layers, and to the input of the second feedforward layer
38
+ in each transformer block.
39
+
40
+ Given the target layers for injecting IA3 parameters, the number of trainable parameters
41
+ can be determined based on the size of the weight matrices.
42
+
43
+
44
+ ## Common IA3 parameters in PEFT
45
+
46
+ As with other methods supported by PEFT, to fine-tune a model using IA3, you need to:
47
+
48
+ 1. Instantiate a base model.
49
+ 2. Create a configuration (`IA3Config`) where you define IA3-specific parameters.
50
+ 3. Wrap the base model with `get_peft_model()` to get a trainable `PeftModel`.
51
+ 4. Train the `PeftModel` as you normally would train the base model.
52
+
53
+ `IA3Config` allows you to control how IA3 is applied to the base model through the following parameters:
54
+
55
+ - `target_modules`: The modules (for example, attention blocks) to apply the IA3 vectors.
56
+ - `feedforward_modules`: The list of modules to be treated as feedforward layers in `target_modules`. While learned vectors are multiplied with
57
+ the output activation for attention blocks, the vectors are multiplied with the input for classic feedforward layers. Note that `feedforward_modules` must be a subset of `target_modules`.
58
+ - `modules_to_save`: List of modules apart from IA3 layers to be set as trainable and saved in the final checkpoint. These typically include the model's custom head that is randomly initialized for the fine-tuning task.
59
+
60
+ ## Example Usage
61
+
62
+ For the task of sequence classification, one can initialize the IA3 config for a Llama model as follows:
63
+
64
+ ```py
65
+ from peft import IA3Config, TaskType
+
+ peft_config = IA3Config(
66
+ task_type=TaskType.SEQ_CLS, target_modules=["k_proj", "v_proj", "down_proj"], feedforward_modules=["down_proj"]
67
+ )
68
+ ```
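+
+ The snippet above only creates the config. Below is a minimal end-to-end sketch of the full workflow; the checkpoint name and label count are placeholders rather than part of the original example:
+
+ ```python
+ from transformers import AutoModelForSequenceClassification
+ from peft import IA3Config, TaskType, get_peft_model
+
+ # hypothetical base checkpoint and number of labels
+ base_model = AutoModelForSequenceClassification.from_pretrained("meta-llama/Llama-2-7b-hf", num_labels=2)
+
+ peft_config = IA3Config(
+     task_type=TaskType.SEQ_CLS, target_modules=["k_proj", "v_proj", "down_proj"], feedforward_modules=["down_proj"]
+ )
+ model = get_peft_model(base_model, peft_config)
+ model.print_trainable_parameters()  # only the IA3 vectors (plus the classification head) are trainable
+ ```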
peft_md_files/conceptual_guides/oft.md ADDED
@@ -0,0 +1,107 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # Orthogonal Finetuning (OFT and BOFT)
18
+
19
+ This conceptual guide gives a brief overview of [OFT](https://arxiv.org/abs/2306.07280) and [BOFT](https://arxiv.org/abs/2311.06243), parameter-efficient fine-tuning techniques that use an orthogonal matrix to multiplicatively transform the pretrained weight matrices.
20
+
21
+ To achieve efficient fine-tuning, OFT represents the weight updates with an orthogonal transformation. The orthogonal transformation is parameterized by an orthogonal matrix that is multiplied with the pretrained weight matrix. These new matrices can be trained to adapt to the new data while keeping the overall number of changes low. The original weight matrix remains frozen and doesn't receive any further adjustments. To produce the final results, the original and the adapted weights are multiplied together.
22
+
23
+ Orthogonal Butterfly (BOFT) generalizes OFT with Butterfly factorization and further improves its parameter efficiency and fine-tuning flexibility. In short, OFT can be viewed as a special case of BOFT. Unlike LoRA, which uses additive low-rank weight updates, BOFT uses multiplicative orthogonal weight updates. The comparison is shown below.
24
+
25
+ <div class="flex justify-center">
26
+ <img src="https://raw.githubusercontent.com/wy1iu/butterfly-oft/main/assets/BOFT_comparison.png"/>
27
+ </div>
28
+
29
+
30
+ BOFT has some advantages compared to LoRA:
31
+
32
+ * BOFT proposes a simple yet generic way to finetune pretrained models to downstream tasks, yielding a better preservation of pretraining knowledge and a better parameter efficiency.
33
+ * Through the orthogonality, BOFT introduces a structural constraint, i.e., keeping the [hyperspherical energy](https://arxiv.org/abs/1805.09298) unchanged during finetuning. This can effectively reduce the forgetting of pretraining knowledge.
34
+ * BOFT uses the butterfly factorization to efficiently parameterize the orthogonal matrix, which yields a compact yet expressive learning space (i.e., hypothesis class).
35
+ * The sparse matrix decomposition in BOFT brings in additional inductive biases that are beneficial to generalization.
36
+
37
+ In principle, BOFT can be applied to any subset of weight matrices in a neural network to reduce the number of trainable parameters. Given the target layers for injecting BOFT parameters, the number of trainable parameters can be determined based on the size of the weight matrices.
38
+
39
+ ## Merge OFT/BOFT weights into the base model
40
+
41
+ Similar to LoRA, the weights learned by OFT/BOFT can be integrated into the pretrained weight matrices using the `merge_and_unload()` function. This function merges the adapter weights with the base model, which allows you to use the newly merged model as a standalone model.
42
+
43
+ <div class="flex justify-center">
44
+ <img src="https://raw.githubusercontent.com/wy1iu/butterfly-oft/main/assets/boft_merge.png"/>
45
+ </div>
46
+
47
+ This works because during training, the orthogonal weight matrix (R in the diagram above) and the pretrained weight matrices are separate. But once training is complete, these weights can actually be merged (multiplied) into a new weight matrix that is equivalent.
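+
+ As a minimal sketch (assuming `boft_model` is an already trained OFT/BOFT `PeftModel`, such as the one created in the example further below, and using a hypothetical output path):
+
+ ```python
+ # merge the orthogonal adapter weights into the base model and drop the PEFT wrapper
+ merged_model = boft_model.merge_and_unload()
+
+ # the result is a plain base model that can be saved and used on its own
+ merged_model.save_pretrained("dinov2-large-boft-merged")
+ ```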
48
+
49
+ ## Utils for OFT / BOFT
50
+
51
+ ### Common OFT / BOFT parameters in PEFT
52
+
53
+ As with other methods supported by PEFT, to fine-tune a model using OFT or BOFT, you need to:
54
+
55
+ 1. Instantiate a base model.
56
+ 2. Create a configuration (`OFTConfig` or `BOFTConfig`) where you define OFT/BOFT-specific parameters.
57
+ 3. Wrap the base model with `get_peft_model()` to get a trainable `PeftModel`.
58
+ 4. Train the `PeftModel` as you normally would train the base model.
59
+
60
+
61
+ ### BOFT-specific parameters
62
+
63
+ `BOFTConfig` allows you to control how OFT/BOFT is applied to the base model through the following parameters:
64
+
65
+ - `boft_block_size`: the BOFT matrix block size across different layers, expressed in `int`. Smaller block sizes result in sparser update matrices with fewer trainable parameters. **Note**: choose `boft_block_size` so that it divides most layers' input dimension (`in_features`), e.g., 4, 8, 16. Specify either `boft_block_size` or `boft_block_num`, but not both (and don't leave both at 0), because `boft_block_size` x `boft_block_num` must equal the layer's input dimension.
+ - `boft_block_num`: the number of BOFT matrix blocks across different layers, expressed in `int`. Fewer blocks result in sparser update matrices with fewer trainable parameters. **Note**: choose `boft_block_num` so that it divides most layers' input dimension (`in_features`). Specify either `boft_block_size` or `boft_block_num`, but not both (and don't leave both at 0), because `boft_block_size` x `boft_block_num` must equal the layer's input dimension.
69
+ - `boft_n_butterfly_factor`: the number of butterfly factors. **Note**: with `boft_n_butterfly_factor=1`, BOFT is the same as vanilla OFT; with `boft_n_butterfly_factor=2`, the effective block size of OFT becomes twice as big and the number of blocks is halved.
70
+ - `bias`: specify if the `bias` parameters should be trained. Can be `"none"`, `"all"` or `"boft_only"`.
71
+ - `boft_dropout`: specify the probability of multiplicative dropout.
72
+ - `target_modules`: The modules (for example, attention blocks) to inject the OFT/BOFT matrices.
73
+ - `modules_to_save`: List of modules apart from OFT/BOFT matrices to be set as trainable and saved in the final checkpoint. These typically include the model's custom head that is randomly initialized for the fine-tuning task.
74
+
75
+
76
+
77
+ ## BOFT Example Usage
78
+
79
+ For examples of applying BOFT to various downstream tasks, take a look at the following step-by-step guides:
82
+ - [Dreambooth finetuning with BOFT](../task_guides/boft_dreambooth)
83
+ - [Controllable generation finetuning with BOFT (ControlNet)](../task_guides/boft_controlnet)
84
+
85
+ For the task of image classification, one can initialize the BOFT config for a DinoV2 model as follows:
86
+
87
+ ```py
88
+ import transformers
89
90
+ from peft import BOFTConfig, get_peft_model
91
+
92
+ config = BOFTConfig(
93
+ boft_block_size=4,
94
+ boft_n_butterfly_factor=2,
95
+ target_modules=["query", "value", "key", "output.dense", "mlp.fc1", "mlp.fc2"],
96
+ boft_dropout=0.1,
97
+ bias="boft_only",
98
+ modules_to_save=["classifier"],
99
+ )
100
+
101
+ model = transformers.Dinov2ForImageClassification.from_pretrained(
102
+ "facebook/dinov2-large",
103
+ num_labels=100,
104
+ )
105
+
106
+ boft_model = get_peft_model(model, config)
107
+ ```
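+
+ As with other PEFT methods, you can then check that only a small fraction of the parameters is trainable and save just the adapter (the output path below is a placeholder):
+
+ ```python
+ boft_model.print_trainable_parameters()
+ boft_model.save_pretrained("dinov2-large-boft")  # stores only the BOFT adapter weights and config
+ ```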
peft_md_files/conceptual_guides/prompting.md ADDED
@@ -0,0 +1,77 @@
1
+ <!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
2
+ rendered properly in your Markdown viewer.
3
+ -->
4
+
5
+ # Soft prompts
6
+
7
+ Training large pretrained language models is very time-consuming and compute-intensive. As they continue to grow in size, there is increasing interest in more efficient training methods such as *prompting*. Prompting primes a frozen pretrained model for a specific downstream task by including a text prompt that describes the task or even demonstrates an example of the task. With prompting, you can avoid fully training a separate model for each downstream task, and use the same frozen pretrained model instead. This is a lot easier because you can use the same model for several different tasks, and it is significantly more efficient to train and store a smaller set of prompt parameters than to train all the model's parameters.
8
+
9
+ There are two categories of prompting methods:
10
+
11
+ - hard prompts are manually handcrafted text prompts with discrete input tokens; the downside is that it requires a lot of effort to create a good prompt
12
+ - soft prompts are learnable tensors concatenated with the input embeddings that can be optimized to a dataset; the downside is that they aren't human readable because you aren't matching these "virtual tokens" to the embeddings of a real word
13
+
14
+ This conceptual guide provides a brief overview of the soft prompt methods included in 🤗 PEFT: prompt tuning, prefix tuning, P-tuning, and multitask prompt tuning.
15
+
16
+ ## Prompt tuning
17
+
18
+ <div class="flex justify-center">
19
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/prompt-tuning.png"/>
20
+ </div>
21
+ <small>Only train and store a significantly smaller set of task-specific prompt parameters <a href="https://hf.co/papers/2104.08691">(image source)</a>.</small>
22
+
23
+ [Prompt tuning](https://hf.co/papers/2104.08691) was developed for text classification tasks on T5 models, and all downstream tasks are cast as a text generation task. For example, sequence classification usually assigns a single class label to a sequence of text. By casting it as a text generation task, the tokens that make up the class label are *generated*. Prompts are added to the input as a series of tokens. Typically, the model parameters are fixed which means the prompt tokens are also fixed by the model parameters.
24
+
25
+ The key idea behind prompt tuning is that prompt tokens have their own parameters that are updated independently. This means you can keep the pretrained model's parameters frozen, and only update the gradients of the prompt token embeddings. The results are comparable to the traditional method of training the entire model, and prompt tuning performance scales as model size increases.
26
+
27
+ Take a look at [Prompt tuning for causal language modeling](../task_guides/clm-prompt-tuning) for a step-by-step guide on how to train a model with prompt tuning.
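+
+ For a rough idea of what this looks like in 🤗 PEFT, here is a minimal sketch; the base model and the number of virtual tokens are placeholders:
+
+ ```python
+ from transformers import AutoModelForCausalLM
+ from peft import PromptTuningConfig, TaskType, get_peft_model
+
+ base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")  # hypothetical base model
+ peft_config = PromptTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=8)
+ model = get_peft_model(base_model, peft_config)
+ model.print_trainable_parameters()  # only the virtual prompt token embeddings are trainable
+ ```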
28
+
29
+ ## Prefix tuning
30
+
31
+ <div class="flex justify-center">
32
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/prefix-tuning.png"/>
33
+ </div>
34
+ <small>Optimize the prefix parameters for each task <a href="https://hf.co/papers/2101.00190">(image source)</a>.</small>
35
+
36
+ [Prefix tuning](https://hf.co/papers/2101.00190) was designed for natural language generation (NLG) tasks on GPT models. It is very similar to prompt tuning; prefix tuning also prepends a sequence of task-specific vectors to the input that can be trained and updated while keeping the rest of the pretrained model's parameters frozen.
37
+
38
+ The main difference is that the prefix parameters are inserted in **all** of the model layers, whereas prompt tuning only adds the prompt parameters to the model input embeddings. The prefix parameters are also optimized by a separate feed-forward network (FFN) instead of being trained directly, because training the soft prompts directly causes instability and hurts performance. The FFN is discarded after updating the soft prompts.
39
+
40
+ As a result, the authors found that prefix tuning demonstrates comparable performance to fully finetuning a model, despite having 1000x fewer parameters, and it performs even better in low-data settings.
41
+
42
+ Take a look at [Prefix tuning for conditional generation](../task_guides/seq2seq-prefix-tuning) for a step-by-step guide on how to train a model with prefix tuning.
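+
+ A minimal prefix tuning sketch looks very similar; the model name and number of virtual tokens are placeholders:
+
+ ```python
+ from transformers import AutoModelForSeq2SeqLM
+ from peft import PrefixTuningConfig, TaskType, get_peft_model
+
+ base_model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")  # hypothetical base model
+ peft_config = PrefixTuningConfig(task_type=TaskType.SEQ_2_SEQ_LM, num_virtual_tokens=20)
+ model = get_peft_model(base_model, peft_config)
+ model.print_trainable_parameters()
+ ```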
43
+
44
+ ## P-tuning
45
+
46
+ <div class="flex justify-center">
47
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/p-tuning.png"/>
48
+ </div>
49
+ <small>Prompt tokens can be inserted anywhere in the input sequence, and they are optimized by a prompt encoder <a href="https://hf.co/papers/2103.10385">(image source)</a>.</small>
50
+
51
+ [P-tuning](https://hf.co/papers/2103.10385) is designed for natural language understanding (NLU) tasks and all language models.
52
+ It is another variation of a soft prompt method; P-tuning also adds a trainable embedding tensor that can be optimized to find better prompts, and it uses a prompt encoder (a bidirectional long-short term memory network or LSTM) to optimize the prompt parameters. Unlike prefix tuning though:
53
+
54
+ - the prompt tokens can be inserted anywhere in the input sequence, and it isn't restricted to only the beginning
55
+ - the prompt tokens are only added to the input instead of adding them to every layer of the model
56
+ - introducing *anchor* tokens can improve performance because they indicate characteristics of a component in the input sequence
57
+
58
+ The results suggest that P-tuning is more efficient than manually crafting prompts, and it enables GPT-like models to compete with BERT-like models on NLU tasks.
59
+
60
+ Take a look at [P-tuning for sequence classification](../task_guides/ptuning-seq-classification) for a step-by-step guide on how to train a model with P-tuning.
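+
+ For reference, a minimal P-tuning sketch; the base model and sizes are placeholders:
+
+ ```python
+ from transformers import AutoModelForSequenceClassification
+ from peft import PromptEncoderConfig, TaskType, get_peft_model
+
+ base_model = AutoModelForSequenceClassification.from_pretrained("roberta-base")  # hypothetical base model
+ peft_config = PromptEncoderConfig(task_type=TaskType.SEQ_CLS, num_virtual_tokens=20, encoder_hidden_size=128)
+ model = get_peft_model(base_model, peft_config)
+ model.print_trainable_parameters()
+ ```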
61
+
62
+ ## Multitask prompt tuning
63
+
64
+ <div class="flex justify-center">
65
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/mpt.png"/>
66
+ </div>
67
+ <small><a href="https://hf.co/papers/2303.02861">Multitask prompt tuning enables parameter-efficient transfer learning</a>.</small>
68
+
69
+ [Multitask prompt tuning (MPT)](https://hf.co/papers/2303.02861) learns a single prompt from data for multiple task types that can be shared for different target tasks. Other existing approaches learn a separate soft prompt for each task that need to be retrieved or aggregated for adaptation to target tasks. MPT consists of two stages:
70
+
71
+ 1. source training - for each task, its soft prompt is decomposed into task-specific vectors. The task-specific vectors are multiplied together to form another matrix W, and the Hadamard product is used between W and a shared prompt matrix P to generate a task-specific prompt matrix. The task-specific prompts are distilled into a single prompt matrix that is shared across all tasks. This prompt is trained with multitask training.
72
+ 2. target adaptation - to adapt the single prompt for a target task, a target prompt is initialized and expressed as the Hadamard product of the shared prompt matrix and the task-specific low-rank prompt matrix.
73
+
74
+ <div class="flex justify-center">
75
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/mpt-decomposition.png"/>
76
+ </div>
77
+ <small><a href="https://hf.co/papers/2103.10385">Prompt decomposition</a>.</small>
peft_md_files/developer_guides/checkpoint.md ADDED
@@ -0,0 +1,250 @@
1
+ <!--Copyright 2024 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # PEFT checkpoint format
18
+
19
+ This document describes how PEFT's checkpoint files are structured and how to convert between the PEFT format and other formats.
20
+
21
+ ## PEFT files
22
+
23
+ PEFT (parameter-efficient fine-tuning) methods only update a small subset of a model's parameters rather than all of them. This is nice because checkpoint files can generally be much smaller than the original model files and are easier to store and share. However, this also means that to load a PEFT model, you need to have the original model available as well.
24
+
25
+ When you call [`~PeftModel.save_pretrained`] on a PEFT model, the PEFT model saves three files, described below:
26
+
27
+ 1. `adapter_model.safetensors` or `adapter_model.bin`
28
+
29
+ By default, the model is saved in the `safetensors` format, a secure alternative to the `bin` format, which is known to be susceptible to [security vulnerabilities](https://huggingface.co/docs/hub/security-pickle) because it uses the pickle utility under the hood. Both formats store the same `state_dict` though, and are interchangeable.
30
+
31
+ The `state_dict` only contains the parameters of the adapter module, not the base model. To illustrate the difference in size, a normal BERT model requires ~420MB of disk space, whereas an IA³ adapter on top of this BERT model only requires ~260KB.
32
+
33
+ 2. `adapter_config.json`
34
+
35
+ The `adapter_config.json` file contains the configuration of the adapter module, which is necessary to load the model. Below is an example of an `adapter_config.json` for an IA³ adapter with standard settings applied to a BERT model:
36
+
37
+ ```json
38
+ {
39
+ "auto_mapping": {
40
+ "base_model_class": "BertModel",
41
+ "parent_library": "transformers.models.bert.modeling_bert"
42
+ },
43
+ "base_model_name_or_path": "bert-base-uncased",
44
+ "fan_in_fan_out": false,
45
+ "feedforward_modules": [
46
+ "output.dense"
47
+ ],
48
+ "inference_mode": true,
49
+ "init_ia3_weights": true,
50
+ "modules_to_save": null,
51
+ "peft_type": "IA3",
52
+ "revision": null,
53
+ "target_modules": [
54
+ "key",
55
+ "value",
56
+ "output.dense"
57
+ ],
58
+ "task_type": null
59
+ }
60
+ ```
61
+
62
+ The configuration file contains:
63
+
64
+ - the adapter module type stored, `"peft_type": "IA3"`
65
+ - information about the base model like `"base_model_name_or_path": "bert-base-uncased"`
66
+ - the revision of the model (if any), `"revision": null`
67
+
68
+ If the base model is not a pretrained Transformers model, the latter two entries will be `null`. Other than that, the settings are all related to the specific IA³ adapter that was used to fine-tune the model.
69
+
70
+ 3. `README.md`
71
+
72
+ The generated `README.md` is the model card of a PEFT model and contains a few pre-filled entries. The intent of this is to make it easier to share the model with others and to provide some basic information about the model. This file is not needed to load the model.
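+
+ As a quick way to see what the `adapter_model.safetensors` file from point 1 contains, here is a small inspection sketch (it assumes the `safetensors` package is installed and uses a placeholder path):
+
+ ```python
+ from safetensors.torch import load_file
+
+ state_dict = load_file("some/path/adapter_model.safetensors")
+ print(len(state_dict), "tensors in the adapter state_dict")
+ print(sum(t.numel() for t in state_dict.values()), "adapter parameters in total")
+ ```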
73
+
74
+ ## Convert to PEFT format
75
+
76
+ When converting from another format to the PEFT format, we require both the `adapter_model.safetensors` (or `adapter_model.bin`) file and the `adapter_config.json` file.
77
+
78
+ ### adapter_model
79
+
80
+ For the model weights, it is important to use the correct mapping from parameter name to value for PEFT to load the file. Getting this mapping right is an exercise in checking the implementation details, as there is no generally agreed upon format for PEFT adapters.
81
+
82
+ Fortunately, figuring out this mapping is not overly complicated for common base cases. Let's look at a concrete example, the [`LoraLayer`](https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora/layer.py):
83
+
84
+ ```python
85
+ # showing only part of the code
86
+
87
+ class LoraLayer(BaseTunerLayer):
88
+ # All names of layers that may contain (trainable) adapter weights
89
+ adapter_layer_names = ("lora_A", "lora_B", "lora_embedding_A", "lora_embedding_B")
90
+ # All names of other parameters that may contain adapter-related parameters
91
+ other_param_names = ("r", "lora_alpha", "scaling", "lora_dropout")
92
+
93
+ def __init__(self, base_layer: nn.Module, **kwargs) -> None:
94
+ self.base_layer = base_layer
95
+ self.r = {}
96
+ self.lora_alpha = {}
97
+ self.scaling = {}
98
+ self.lora_dropout = nn.ModuleDict({})
99
+ self.lora_A = nn.ModuleDict({})
100
+ self.lora_B = nn.ModuleDict({})
101
+ # For Embedding layer
102
+ self.lora_embedding_A = nn.ParameterDict({})
103
+ self.lora_embedding_B = nn.ParameterDict({})
104
+ # Mark the weight as unmerged
105
+ self._disable_adapters = False
106
+ self.merged_adapters = []
107
+ self.use_dora: dict[str, bool] = {}
108
+ self.lora_magnitude_vector: Optional[torch.nn.ParameterDict] = None # for DoRA
109
+ self._caches: dict[str, Any] = {}
110
+ self.kwargs = kwargs
111
+ ```
112
+
113
+ In the `__init__` code used by all `LoraLayer` classes in PEFT, there are a bunch of parameters used to initialize the model, but only a few are relevant for the checkpoint file: `lora_A`, `lora_B`, `lora_embedding_A`, and `lora_embedding_B`. These parameters are listed in the class attribute `adapter_layer_names` and contain the learnable parameters, so they must be included in the checkpoint file. All the other parameters, like the rank `r`, are derived from the `adapter_config.json` and must be included there (unless the default value is used).
114
+
115
+ Let's check the `state_dict` of a PEFT LoRA model applied to BERT. When printing the first five keys using the default LoRA settings (the remaining keys are the same, just with different layer numbers), we get:
116
+
117
+ - `base_model.model.encoder.layer.0.attention.self.query.lora_A.weight`
118
+ - `base_model.model.encoder.layer.0.attention.self.query.lora_B.weight`
119
+ - `base_model.model.encoder.layer.0.attention.self.value.lora_A.weight`
120
+ - `base_model.model.encoder.layer.0.attention.self.value.lora_B.weight`
121
+ - `base_model.model.encoder.layer.1.attention.self.query.lora_A.weight`
122
+ - etc.
123
+
124
+ Let's break this down:
125
+
126
+ - By default, for BERT models, LoRA is applied to the `query` and `value` layers of the attention module. This is why you see `attention.self.query` and `attention.self.value` in the key names for each layer.
127
+ - LoRA decomposes the weights into two low-rank matrices, `lora_A` and `lora_B`. This is where `lora_A` and `lora_B` come from in the key names.
128
+ - These LoRA matrices are implemented as `nn.Linear` layers, so the parameters are stored in the `.weight` attribute (`lora_A.weight`, `lora_B.weight`).
129
+ - By default, LoRA isn't applied to BERT's embedding layer, so there are _no entries_ for `lora_A_embedding` and `lora_B_embedding`.
130
+ - The keys of the `state_dict` always start with `"base_model.model."`. The reason is that, in PEFT, we wrap the base model inside a tuner-specific model (`LoraModel` in this case), which itself is wrapped in a general PEFT model (`PeftModel`). For this reason, these two prefixes are added to the keys. When converting to the PEFT format, it is required to add these prefixes.
131
+
132
+ <Tip>
133
+
134
+ This last point is not true for prefix tuning techniques like prompt tuning. There, the extra embeddings are directly stored in the `state_dict` without any prefixes added to the keys.
135
+
136
+ </Tip>
137
+
138
+ When inspecting the parameter names in the loaded model, you might be surprised to find that they look a bit different, e.g. `base_model.model.encoder.layer.0.attention.self.query.lora_A.default.weight`. The difference is the *`.default`* part in the second to last segment. This part exists because PEFT generally allows the addition of multiple adapters at once (using an `nn.ModuleDict` or `nn.ParameterDict` to store them). For example, if you add another adapter called "other", the key for that adapter would be `base_model.model.encoder.layer.0.attention.self.query.lora_A.other.weight`.
139
+
140
+ When you call [`~PeftModel.save_pretrained`], the adapter name is stripped from the keys. The reason is that the adapter name is not an important part of the model architecture; it is just an arbitrary name. When loading the adapter, you could choose a totally different name, and the model would still work the same way. This is why the adapter name is not stored in the checkpoint file.
141
+
142
+ <Tip>
143
+
144
+ If you call `save_pretrained("some/path")` and the adapter name is not `"default"`, the adapter is stored in a sub-directory with the same name as the adapter. So if the name is "other", it would be stored inside of `some/path/other`.
145
+
146
+ </Tip>
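+
+ To see both naming schemes side by side, here is a small sketch, assuming a default LoRA adapter on a `bert-base-uncased` base model as in the example above:
+
+ ```python
+ from transformers import AutoModel
+ from peft import LoraConfig, get_peft_model, get_peft_model_state_dict
+
+ base_model = AutoModel.from_pretrained("bert-base-uncased")
+ peft_model = get_peft_model(base_model, LoraConfig())
+
+ # keys as they appear in the loaded model, including the adapter name ("default")
+ print([k for k in peft_model.state_dict() if "lora_" in k][0])
+
+ # keys as they are written to the checkpoint, with the adapter name stripped
+ print(next(iter(get_peft_model_state_dict(peft_model))))
+ ```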
147
+
148
+ In some circumstances, deciding which values to add to the checkpoint file can become a bit more complicated. For example, in PEFT, DoRA is implemented as a special case of LoRA. If you want to convert a DoRA model to PEFT, you should create a LoRA checkpoint with extra entries for DoRA. You can see this in the `__init__` of the previous `LoraLayer` code:
149
+
150
+ ```python
151
+ self.lora_magnitude_vector: Optional[torch.nn.ParameterDict] = None # for DoRA
152
+ ```
153
+
154
+ This indicates that there is an optional extra parameter per layer for DoRA.
155
+
156
+ ### adapter_config
157
+
158
+ All the other information needed to load a PEFT model is contained in the `adapter_config.json` file. Let's check this file for a LoRA model applied to BERT:
159
+
160
+ ```json
161
+ {
162
+ "alpha_pattern": {},
163
+ "auto_mapping": {
164
+ "base_model_class": "BertModel",
165
+ "parent_library": "transformers.models.bert.modeling_bert"
166
+ },
167
+ "base_model_name_or_path": "bert-base-uncased",
168
+ "bias": "none",
169
+ "fan_in_fan_out": false,
170
+ "inference_mode": true,
171
+ "init_lora_weights": true,
172
+ "layer_replication": null,
173
+ "layers_pattern": null,
174
+ "layers_to_transform": null,
175
+ "loftq_config": {},
176
+ "lora_alpha": 8,
177
+ "lora_dropout": 0.0,
178
+ "megatron_config": null,
179
+ "megatron_core": "megatron.core",
180
+ "modules_to_save": null,
181
+ "peft_type": "LORA",
182
+ "r": 8,
183
+ "rank_pattern": {},
184
+ "revision": null,
185
+ "target_modules": [
186
+ "query",
187
+ "value"
188
+ ],
189
+ "task_type": null,
190
+ "use_dora": false,
191
+ "use_rslora": false
192
+ }
193
+ ```
194
+
195
+ This contains a lot of entries, and at first glance, it could feel overwhelming to figure out all the right values to put in there. However, most of the entries are not necessary to load the model. This is either because they use the default values and don't need to be added or because they only affect the initialization of the LoRA weights, which is irrelevant when it comes to loading the model. If you find that you don't know what a specific parameter does, e.g., `"use_rslora",` don't add it, and you should be fine. Also note that as more options are added, this file will get more entries in the future, but it should be backward compatible.
196
+
197
+ At the minimum, you should include the following entries:
198
+
199
+ ```json
200
+ {
201
+ "target_modules": ["query", "value"],
202
+ "peft_type": "LORA"
203
+ }
204
+ ```
205
+
206
+ However, adding as many entries as possible, like the rank `r` or the `base_model_name_or_path` (if it's a Transformers model) is recommended. This information can help others understand the model better and share it more easily. To check which keys and values are expected, check out the [config.py](https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora/config.py) file (as an example, this is the config file for LoRA) in the PEFT source code.
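+
+ Instead of writing `adapter_config.json` by hand, you can also let PEFT generate it from a config object, which guarantees consistent keys and defaults. A sketch, with a placeholder output directory:
+
+ ```python
+ from peft import LoraConfig
+
+ config = LoraConfig(r=8, target_modules=["query", "value"], base_model_name_or_path="bert-base-uncased")
+ config.save_pretrained("converted-adapter")  # writes converted-adapter/adapter_config.json
+ ```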
207
+
208
+ ## Model storage
209
+
210
+ In some circumstances, you might want to store the whole PEFT model, including the base weights. This can be necessary if, for instance, the base model is not available to the users trying to load the PEFT model. You can either merge the weights first or convert the PEFT model into a Transformers model.
211
+
212
+ ### Merge the weights
213
+
214
+ The most straightforward way to store the whole PEFT model is to merge the adapter weights into the base weights:
215
+
216
+ ```python
217
+ merged_model = model.merge_and_unload()
218
+ merged_model.save_pretrained(...)
219
+ ```
220
+
221
+ There are some disadvantages to this approach, though:
222
+
223
+ - Once [`~LoraModel.merge_and_unload`] is called, you get a basic model without any PEFT-specific functionality. This means you can't use any of the PEFT-specific methods anymore.
224
+ - You cannot unmerge the weights, load multiple adapters at once, disable the adapter, etc.
225
+ - Not all PEFT methods support merging weights.
226
+ - Some PEFT methods may generally allow merging, but not with specific settings (e.g. when using certain quantization techniques).
227
+ - The whole model will be much larger than the PEFT model, as it will contain all the base weights as well.
228
+
229
+ But inference with a merged model should be a bit faster.
230
+
231
+ ### Convert to a Transformers model
232
+
233
+ Another way to save the whole model, assuming the base model is a Transformers model, is to use this hacky approach to directly insert the PEFT weights into the base model and save it, which only works if you "trick" Transformers into believing the PEFT model is not a PEFT model. This only works with LoRA because other adapters are not implemented in Transformers.
234
+
235
+ ```python
236
+ from transformers import AutoModel
+
+ model = ... # the PEFT model
237
+ ...
238
+ # after you finish training the model, save it in a temporary location
239
+ model.save_pretrained(<temp_location>)
240
+ # now load this model directly into a transformers model, without the PEFT wrapper
241
+ # the PEFT weights are directly injected into the base model
242
+ model_loaded = AutoModel.from_pretrained(<temp_location>)
243
+ # now make the loaded model believe that it is _not_ a PEFT model
244
+ model_loaded._hf_peft_config_loaded = False
245
+ # now when we save it, it will save the whole model
246
+ model_loaded.save_pretrained(<final_location>)
247
+ # or upload to Hugging Face Hub
248
+ model_loaded.push_to_hub(<final_location>)
249
+ ```
250
+
peft_md_files/developer_guides/contributing.md ADDED
@@ -0,0 +1,92 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # Contribute to PEFT
18
+
19
+ We are happy to accept contributions to PEFT. If you plan to contribute, please read this to make the process as smooth as possible.
20
+
21
+ ## Installation
22
+
23
+ For code contributions to PEFT, you should choose the ["source"](../install#source) installation method.
24
+
25
+ If you are new to creating a pull request, follow the [Creating a pull request](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request) guide by GitHub.
26
+
27
+ ## Tests and code quality checks
28
+
29
+ Regardless of the contribution type (unless it’s only about the docs), you should run tests and code quality checks before creating a PR to ensure your contribution doesn’t break anything and follows the project standards.
30
+
31
+ We provide a Makefile to execute the necessary tests. Run the code below for the unit test:
32
+
33
+ ```sh
34
+ make test
35
+ ```
36
+
37
+ Run one of the following to either only check or check and fix code quality and style:
38
+
39
+ ```sh
40
+ make quality # just check
41
+ make style # check and fix
42
+ ```
43
+
44
+ You can also set up [`pre-commit`](https://pre-commit.com/) to run these fixes
45
+ automatically as Git commit hooks.
46
+
47
+ ```bash
48
+ $ pip install pre-commit
49
+ $ pre-commit install
50
+ ```
51
+
52
+ Running all the tests can take a couple of minutes, so during development it can be more efficient to only run tests specific to your change:
53
+
54
+ ```sh
55
+ pytest tests/ -k <name-of-test>
56
+ ```
57
+
58
+ This should finish much quicker and allow for faster iteration. However, you should still run the whole test suite before creating a PR because your change can inadvertently break tests that at first glance are unrelated.
59
+
60
+ If your change is specific to a hardware setting (e.g., it requires CUDA), take a look at [tests/test_gpu_examples.py](https://github.com/huggingface/peft/blob/1c1c7fdaa6e6abaa53939b865dee1eded82ad032/tests/test_gpu_examples.py) and [tests/test_common_gpu.py](https://github.com/huggingface/peft/blob/1c1c7fdaa6e6abaa53939b865dee1eded82ad032/tests/test_common_gpu.py) to see if it makes sense to add tests there. If your change could have an effect on saving and loading models, please run the tests with the `--regression` flag to trigger regression tests.
61
+
62
+ It can happen that while you’re working on your PR, the underlying code base changes due to other changes being merged. If that happens – especially when there is a merge conflict – please update your branch with the latest changes. This can be a merge or a rebase, and we'll squash and merge the PR once it’s ready.
63
+
64
+ ## PR description
65
+
66
+ When opening a PR, please provide a nice description of the change you're proposing. If it relates to other issues or PRs, please reference them. Providing a good description not only helps the reviewers review your code better and faster, it can also be used later (as a basis) for the commit message which helps with long term maintenance of the project.
67
+
68
+ If your code makes some non-trivial changes, it may also be a good idea to add comments to the code to explain those changes. For example, if you had to iterate on your implementation multiple times because the most obvious way didn’t work, it’s a good indication that a code comment is needed.
69
+
70
+ ## Bugfixes
71
+
72
+ Please give a description of the circumstances that led to the bug. If there is an existing issue, please link to it (e.g., “Resolves #12345”).
73
+
74
+ Ideally when a bugfix is provided, it should be accompanied by a test for the bug. The test should fail with the current code and pass with the bugfix. Add a comment to the test that references the issue or PR. Without a test, it is more difficult to prevent regressions in the future.
75
+
76
+ ## Add a new fine-tuning method
77
+
78
+ New parameter-efficient fine-tuning methods are developed all the time. If you would like to add a new and promising method to PEFT, please follow these steps.
79
+
80
+ 1. Before you start to implement the new method, please open a GitHub issue with your proposal. This way, the maintainers can give you some early feedback.
81
+ 2. Please add a link to the source (usually a paper) of the method. Some evidence should be provided that there is general interest in using the method. We will not add newly published methods for which there is no evidence of demand.
82
+ 3. When implementing the method, it makes sense to use existing implementations as a guide. Moreover, when you structure your code, please take inspiration from the other PEFT methods. For example, if your method is similar to LoRA, it makes sense to structure your code similarly or even reuse some functions or classes where appropriate (some code duplication is okay, but don't overdo it).
83
+ 4. Ideally, in addition to the implementation of the new method, there should also be examples (notebooks, scripts), documentation, and an extensive test suite that proves the method works with a variety of tasks. However, this can be more challenging so it is acceptable to only provide the implementation and at least one working example. Documentation and tests can be added in follow up PRs.
84
+ 5. Once you have something that seems to be working, don’t hesitate to create a draft PR even if it’s not in a mergeable state yet. The maintainers are happy to give you feedback and guidance along the way.
85
+
86
+ ## Add other features
87
+
88
+ It is best if you first open an issue on GitHub with a proposal to add the new feature. This way, you can discuss with the maintainers if it makes sense to add the feature before spending too much time on implementing it.
89
+
90
+ New features should generally be accompanied by tests and documentation or examples. Without the latter, users will have a hard time discovering your cool new feature.
91
+
92
+ Changes to the code should be implemented in a backward-compatible way. For example, existing code should continue to work the same way after the feature is merged.
peft_md_files/developer_guides/custom_models.md ADDED
@@ -0,0 +1,310 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # Custom models
18
+
19
+ Some fine-tuning techniques, such as prompt tuning, are specific to language models. That means in 🤗 PEFT, it is
20
+ assumed a 🤗 Transformers model is being used. However, other fine-tuning techniques - like
21
+ [LoRA](../conceptual_guides/lora) - are not restricted to specific model types.
22
+
23
+ In this guide, we will see how LoRA can be applied to a multilayer perceptron, a computer vision model from the [timm](https://huggingface.co/docs/timm/index) library, or a new 🤗 Transformers architecture.
24
+
25
+ ## Multilayer perceptron
26
+
27
+ Let's assume that we want to fine-tune a multilayer perceptron with LoRA. Here is the definition:
28
+
29
+ ```python
30
+ from torch import nn
31
+
32
+
33
+ class MLP(nn.Module):
34
+ def __init__(self, num_units_hidden=2000):
35
+ super().__init__()
36
+ self.seq = nn.Sequential(
37
+ nn.Linear(20, num_units_hidden),
38
+ nn.ReLU(),
39
+ nn.Linear(num_units_hidden, num_units_hidden),
40
+ nn.ReLU(),
41
+ nn.Linear(num_units_hidden, 2),
42
+ nn.LogSoftmax(dim=-1),
43
+ )
44
+
45
+ def forward(self, X):
46
+ return self.seq(X)
47
+ ```
48
+
49
+ This is a straightforward multilayer perceptron with an input layer, a hidden layer, and an output layer.
50
+
51
+ <Tip>
52
+
53
+ For this toy example, we choose an exceedingly large number of hidden units to highlight the efficiency gains
54
+ from PEFT, but those gains are in line with more realistic examples.
55
+
56
+ </Tip>
57
+
58
+ There are a few linear layers in this model that could be tuned with LoRA. When working with common 🤗 Transformers
59
+ models, PEFT will know which layers to apply LoRA to, but in this case, it is up to us as a user to choose the layers.
60
+ To determine the names of the layers to tune:
61
+
62
+ ```python
63
+ print([(n, type(m)) for n, m in MLP().named_modules()])
64
+ ```
65
+
66
+ This should print:
67
+
68
+ ```
69
+ [('', __main__.MLP),
70
+ ('seq', torch.nn.modules.container.Sequential),
71
+ ('seq.0', torch.nn.modules.linear.Linear),
72
+ ('seq.1', torch.nn.modules.activation.ReLU),
73
+ ('seq.2', torch.nn.modules.linear.Linear),
74
+ ('seq.3', torch.nn.modules.activation.ReLU),
75
+ ('seq.4', torch.nn.modules.linear.Linear),
76
+ ('seq.5', torch.nn.modules.activation.LogSoftmax)]
77
+ ```
78
+
79
+ Let's say we want to apply LoRA to the input layer and to the hidden layer, those are `'seq.0'` and `'seq.2'`. Moreover,
80
+ let's assume we want to update the output layer without LoRA, that would be `'seq.4'`. The corresponding config would
81
+ be:
82
+
83
+ ```python
84
+ from peft import LoraConfig
85
+
86
+ config = LoraConfig(
87
+ target_modules=["seq.0", "seq.2"],
88
+ modules_to_save=["seq.4"],
89
+ )
90
+ ```
91
+
92
+ With that, we can create our PEFT model and check the fraction of parameters trained:
93
+
94
+ ```python
95
+ from peft import get_peft_model
96
+
97
+ model = MLP()
98
+ peft_model = get_peft_model(model, config)
99
+ peft_model.print_trainable_parameters()
100
+ # prints trainable params: 56,164 || all params: 4,100,164 || trainable%: 1.369798866581922
101
+ ```
102
+
103
+ Finally, we can use any training framework we like, or write our own fit loop, to train the `peft_model`.
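+
+ For illustration, here is a minimal training loop on random data; the data is purely hypothetical and only meant to show the mechanics:
+
+ ```python
+ import torch
+
+ X = torch.rand(64, 20)          # 20 input features, matching the MLP above
+ y = torch.randint(0, 2, (64,))  # 2 classes
+
+ optimizer = torch.optim.Adam((p for p in peft_model.parameters() if p.requires_grad), lr=2e-3)
+ criterion = torch.nn.NLLLoss()  # the MLP ends in LogSoftmax
+
+ for epoch in range(10):
+     optimizer.zero_grad()
+     loss = criterion(peft_model(X), y)
+     loss.backward()
+     optimizer.step()
+ ```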
104
+
105
+ For a complete example, check out [this notebook](https://github.com/huggingface/peft/blob/main/examples/multilayer_perceptron/multilayer_perceptron_lora.ipynb).
106
+
107
+ ## timm models
108
+
109
+ The [timm](https://huggingface.co/docs/timm/index) library contains a large number of pretrained computer vision models.
110
+ Those can also be fine-tuned with PEFT. Let's check out how this works in practice.
111
+
112
+ To start, ensure that timm is installed in the Python environment:
113
+
114
+ ```bash
115
+ python -m pip install -U timm
116
+ ```
117
+
118
+ Next we load a timm model for an image classification task:
119
+
120
+ ```python
121
+ import timm
122
+
123
+ num_classes = ...
124
+ model_id = "timm/poolformer_m36.sail_in1k"
125
+ model = timm.create_model(model_id, pretrained=True, num_classes=num_classes)
126
+ ```
127
+
128
+ Again, we need to make a decision about what layers to apply LoRA to. Since LoRA supports 2D conv layers, and since
129
+ those are a major building block of this model, we should apply LoRA to the 2D conv layers. To identify the names of
130
+ those layers, let's look at all the layer names:
131
+
132
+ ```python
133
+ print([(n, type(m)) for n, m in model.named_modules()])
134
+ ```
135
+
136
+ This will print a very long list, we'll only show the first few:
137
+
138
+ ```
139
+ [('', timm.models.metaformer.MetaFormer),
140
+ ('stem', timm.models.metaformer.Stem),
141
+ ('stem.conv', torch.nn.modules.conv.Conv2d),
142
+ ('stem.norm', torch.nn.modules.linear.Identity),
143
+ ('stages', torch.nn.modules.container.Sequential),
144
+ ('stages.0', timm.models.metaformer.MetaFormerStage),
145
+ ('stages.0.downsample', torch.nn.modules.linear.Identity),
146
+ ('stages.0.blocks', torch.nn.modules.container.Sequential),
147
+ ('stages.0.blocks.0', timm.models.metaformer.MetaFormerBlock),
148
+ ('stages.0.blocks.0.norm1', timm.layers.norm.GroupNorm1),
149
+ ('stages.0.blocks.0.token_mixer', timm.models.metaformer.Pooling),
150
+ ('stages.0.blocks.0.token_mixer.pool', torch.nn.modules.pooling.AvgPool2d),
151
+ ('stages.0.blocks.0.drop_path1', torch.nn.modules.linear.Identity),
152
+ ('stages.0.blocks.0.layer_scale1', timm.models.metaformer.Scale),
153
+ ('stages.0.blocks.0.res_scale1', torch.nn.modules.linear.Identity),
154
+ ('stages.0.blocks.0.norm2', timm.layers.norm.GroupNorm1),
155
+ ('stages.0.blocks.0.mlp', timm.layers.mlp.Mlp),
156
+ ('stages.0.blocks.0.mlp.fc1', torch.nn.modules.conv.Conv2d),
157
+ ('stages.0.blocks.0.mlp.act', torch.nn.modules.activation.GELU),
158
+ ('stages.0.blocks.0.mlp.drop1', torch.nn.modules.dropout.Dropout),
159
+ ('stages.0.blocks.0.mlp.norm', torch.nn.modules.linear.Identity),
160
+ ('stages.0.blocks.0.mlp.fc2', torch.nn.modules.conv.Conv2d),
161
+ ('stages.0.blocks.0.mlp.drop2', torch.nn.modules.dropout.Dropout),
162
+ ('stages.0.blocks.0.drop_path2', torch.nn.modules.linear.Identity),
163
+ ('stages.0.blocks.0.layer_scale2', timm.models.metaformer.Scale),
164
+ ('stages.0.blocks.0.res_scale2', torch.nn.modules.linear.Identity),
165
+ ('stages.0.blocks.1', timm.models.metaformer.MetaFormerBlock),
166
+ ('stages.0.blocks.1.norm1', timm.layers.norm.GroupNorm1),
167
+ ('stages.0.blocks.1.token_mixer', timm.models.metaformer.Pooling),
168
+ ('stages.0.blocks.1.token_mixer.pool', torch.nn.modules.pooling.AvgPool2d),
169
+ ...
170
+ ('head.global_pool.flatten', torch.nn.modules.linear.Identity),
171
+ ('head.norm', timm.layers.norm.LayerNorm2d),
172
+ ('head.flatten', torch.nn.modules.flatten.Flatten),
173
+ ('head.drop', torch.nn.modules.linear.Identity),
174
+ ('head.fc', torch.nn.modules.linear.Linear)]
175
176
+ ```
177
+
178
+ Upon closer inspection, we see that the 2D conv layers have names such as `"stages.0.blocks.0.mlp.fc1"` and
179
+ `"stages.0.blocks.0.mlp.fc2"`. How can we match those layer names specifically? You can write a [regular
180
+ expression](https://docs.python.org/3/library/re.html) to match the layer names. For our case, the regex
181
+ `r".*\.mlp\.fc\d"` should do the job.
182
+
183
+ Furthermore, as in the first example, we should ensure that the output layer, in this case the classification head, is
184
+ also updated. Looking at the end of the list printed above, we can see that it's named `'head.fc'`. With that in mind,
185
+ here is our LoRA config:
186
+
187
+ ```python
188
+ config = LoraConfig(target_modules=r".*\.mlp\.fc\d", modules_to_save=["head.fc"])
189
+ ```
190
+
191
+ Then we only need to create the PEFT model by passing our base model and the config to `get_peft_model`:
192
+
193
+ ```python
194
+ peft_model = get_peft_model(model, config)
195
+ peft_model.print_trainable_parameters()
196
+ # prints trainable params: 1,064,454 || all params: 56,467,974 || trainable%: 1.88505789139876
197
+ ```
198
+
199
+ This shows us that we only need to train less than 2% of all parameters, which is a huge efficiency gain.
200
+
201
+ For a complete example, check out [this notebook](https://github.com/huggingface/peft/blob/main/examples/image_classification/image_classification_timm_peft_lora.ipynb).
202
+
203
+ ## New transformers architectures
204
+
205
+ When new popular transformers architectures are released, we do our best to quickly add them to PEFT. If you come across a transformers model that is not supported out of the box, don't worry, it will most likely still work if the config is set correctly. Specifically, you have to identify the layers that should be adapted and set them correctly when initializing the corresponding config class, e.g. `LoraConfig`. Here are some tips to help with this.
206
+
207
+ As a first step, it is a good idea to check the existing models for inspiration. You can find them inside of [constants.py](https://github.com/huggingface/peft/blob/main/src/peft/utils/constants.py) in the PEFT repository. Often, you'll find a similar architecture that uses the same names. For example, if the new model architecture is a variation of the "mistral" model and you want to apply LoRA, you can see that the entry for "mistral" in `TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING` contains `["q_proj", "v_proj"]`. This tells you that for "mistral" models, the `target_modules` for LoRA should be `["q_proj", "v_proj"]`:
208
+
209
+ ```python
210
+ from peft import LoraConfig, get_peft_model
211
+
212
+ my_mistral_model = ...
213
+ config = LoraConfig(
214
+ target_modules=["q_proj", "v_proj"],
215
+ ..., # other LoRA arguments
216
+ )
217
+ peft_model = get_peft_model(my_mistral_model, config)
218
+ ```
219
+
220
+ If that doesn't help, check the existing modules in your model architecture with the `named_modules` method and try to identify the attention layers, especially the key, query, and value layers. Those will often have names such as `c_attn`, `query`, `q_proj`, etc. The key layer is not always adapted, and ideally, you should check whether including it results in better performance.
221
+
222
+ Additionally, linear layers are common targets to be adapted (e.g. in the [QLoRA paper](https://arxiv.org/abs/2305.14314), the authors suggest adapting them as well). Their names will often contain the strings `fc` or `dense`.
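+
+ A quick way to scan an unfamiliar architecture for such candidate layers is a small sketch like the following, where `my_model` is a placeholder for your model instance:
+
+ ```python
+ common_names = ("q_proj", "k_proj", "v_proj", "query", "value", "c_attn", "fc", "dense")
+ candidates = sorted({n for n, _ in my_model.named_modules() if any(s in n.split(".")[-1] for s in common_names)})
+ print(candidates[:20])
+ ```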
223
+
224
+ If you want to add a new model to PEFT, please create an entry in [constants.py](https://github.com/huggingface/peft/blob/main/src/peft/utils/constants.py) and open a pull request on the [repository](https://github.com/huggingface/peft/pulls). Don't forget to update the [README](https://github.com/huggingface/peft#models-support-matrix) as well.
225
+
226
+ ## Verify parameters and layers
227
+
228
+ You can verify whether you've correctly applied a PEFT method to your model in a few ways.
229
+
230
+ * Check the fraction of parameters that are trainable with the [`~PeftModel.print_trainable_parameters`] method. If this number is lower or higher than expected, check the model `repr` by printing the model. This shows the names of all the layer types in the model. Ensure that only the intended target layers are replaced by the adapter layers. For example, if LoRA is applied to `nn.Linear` layers, then you should only see `lora.Linear` layers being used.
231
+
232
+ ```py
233
+ peft_model.print_trainable_parameters()
234
+ ```
235
+
236
+ * Another way you can view the adapted layers is to use the `targeted_module_names` attribute to list the name of each module that was adapted.
237
+
238
+ ```python
239
+ print(peft_model.targeted_module_names)
240
+ ```
241
+
242
+ ## Unsupported module types
243
+
244
+ Methods like LoRA only work if the target modules are supported by PEFT. For example, it's possible to apply LoRA to `nn.Linear` and `nn.Conv2d` layers, but not, for instance, to `nn.LSTM`. If you find a layer class you want to apply PEFT to is not supported, you can:
245
+
246
+ - define a custom mapping to dynamically dispatch custom modules in LoRA
247
+ - open an [issue](https://github.com/huggingface/peft/issues) and request the feature; if demand for this module type is sufficiently high, the maintainers will implement it or guide you on how to implement it yourself
248
+
249
+ ### Experimental support for dynamic dispatch of custom modules in LoRA
250
+
251
+ > [!WARNING]
252
+ > This feature is experimental and subject to change, depending on its reception by the community. We will introduce a public and stable API if there is significant demand for it.
253
+
254
+ PEFT supports an experimental API for custom module types for LoRA. Let's assume you have a LoRA implementation for LSTMs. Normally, you would not be able to tell PEFT to use it, even if it would theoretically work with PEFT. However, this is possible with dynamic dispatch of custom layers.
255
+
256
+ The experimental API currently looks like this:
257
+
258
+ ```python
259
+ class MyLoraLSTMLayer:
260
+ ...
261
+
262
+ base_model = ... # load the base model that uses LSTMs
263
+
264
+ # add the LSTM layer names to target_modules
265
+ config = LoraConfig(..., target_modules=["lstm"])
266
+ # define a mapping from base layer type to LoRA layer type
267
+ custom_module_mapping = {nn.LSTM: MyLoraLSTMLayer}
268
+ # register the new mapping
269
+ config._register_custom_module(custom_module_mapping)
270
+ # after registration, create the PEFT model
271
+ peft_model = get_peft_model(base_model, config)
272
+ # do training
273
+ ```
274
+
275
+ <Tip>
276
+
277
+ When you call [`get_peft_model`], you will see a warning because PEFT does not recognize the targeted module type. In this case, you can ignore this warning.
278
+
279
+ </Tip>
280
+
281
+ By supplying a custom mapping, PEFT first checks the base model's layers against the custom mapping and dispatches to the custom LoRA layer type if there is a match. If there is no match, PEFT checks the built-in LoRA layer types for a match.
282
+
283
+ Therefore, this feature can also be used to override existing dispatch logic, e.g. if you want to use your own LoRA layer for `nn.Linear` instead of using the one provided by PEFT.
284
+
285
+ When creating your custom LoRA module, please follow the same rules as the [existing LoRA modules](https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora/layer.py). Some important constraints to consider:
286
+
287
+ - The custom module should inherit from `nn.Module` and `peft.tuners.lora.layer.LoraLayer`.
288
+ - The `__init__` method of the custom module should have the positional arguments `base_layer` and `adapter_name`. After this, there are additional `**kwargs` that you are free to use or ignore.
289
+ - The learnable parameters should be stored in an `nn.ModuleDict` or `nn.ParameterDict`, where the key corresponds to the name of the specific adapter (remember that a model can have more than one adapter at a time).
290
+ - The name of these learnable parameter attributes should start with `"lora_"`, e.g. `self.lora_new_param = ...`.
291
+ - Some methods are optional, e.g. you only need to implement `merge` and `unmerge` if you want to support weight merging.
292
+
293
+ Currently, the information about the custom module does not persist when you save the model. When loading the model, you have to register the custom modules again.
294
+
295
+ ```python
296
+ # saving works as always and includes the parameters of the custom modules
297
+ peft_model.save_pretrained(<model-path>)
298
+
299
+ # loading the model later:
300
+ base_model = ...
301
+ # load the LoRA config that you saved earlier
302
+ config = LoraConfig.from_pretrained(<model-path>)
303
+ # register the custom module again, the same way as the first time
304
+ custom_module_mapping = {nn.LSTM: MyLoraLSTMLayer}
305
+ config._register_custom_module(custom_module_mapping)
306
+ # pass the config instance to from_pretrained:
307
+ peft_model = PeftModel.from_pretrained(base_model, <model-path>, config=config)
308
+ ```
309
+
310
+ If you use this feature and find it useful, or if you encounter problems, let us know by creating an issue or a discussion on GitHub. This allows us to estimate the demand for this feature and add a public API if it is sufficiently high.
peft_md_files/developer_guides/lora.md ADDED
@@ -0,0 +1,384 @@
1
+ <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
+
3
+ Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
+ the License. You may obtain a copy of the License at
5
+
6
+ http://www.apache.org/licenses/LICENSE-2.0
7
+
8
+ Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
+ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
+ specific language governing permissions and limitations under the License.
11
+
12
+ ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
13
+ rendered properly in your Markdown viewer.
14
+
15
+ -->
16
+
17
+ # LoRA
18
+
19
+ LoRA is a low-rank decomposition method that reduces the number of trainable parameters, which speeds up finetuning large models and uses less memory. In PEFT, using LoRA is as easy as setting up a [`LoraConfig`] and wrapping a base model with [`get_peft_model`] to create a trainable [`PeftModel`].
20
+
21
+ This guide explores in more detail other options and features for using LoRA.
22
+
23
+ ## Initialization
24
+
25
+ The initialization of LoRA weights is controlled by the parameter `init_lora_weights` in [`LoraConfig`]. By default, PEFT initializes LoRA weights with Kaiming-uniform for weight A and zeros for weight B resulting in an identity transform (same as the reference [implementation](https://github.com/microsoft/LoRA)).
26
+
27
+ It is also possible to pass `init_lora_weights="gaussian"`. As the name suggests, this initializes weight A with a Gaussian distribution and zeros for weight B (this is how [Diffusers](https://huggingface.co/docs/diffusers/index) initializes LoRA weights).
28
+
29
+ ```py
30
+ from peft import LoraConfig
31
+
32
+ config = LoraConfig(init_lora_weights="gaussian", ...)
33
+ ```
34
+
35
+ There is also an option to set `init_lora_weights=False`, which is useful for debugging and testing; these should be the only situations in which you use it. When choosing this option, the LoRA weights are initialized such that they do *not* result in an identity transform.
36
+
37
+ ```py
38
+ from peft import LoraConfig
39
+
40
+ config = LoraConfig(init_lora_weights=False, ...)
41
+ ```
42
+
43
+ ### PiSSA
44
+ [PiSSA](https://arxiv.org/abs/2404.02948) initializes the LoRA adapter using the principal singular values and singular vectors. This straightforward modification allows PiSSA to converge more rapidly than LoRA and ultimately attain superior performance. Moreover, PiSSA reduces the quantization error compared to QLoRA, leading to further enhancements.
45
+
46
+ Configure the initialization method to "pissa", which may take several minutes to execute SVD on the pre-trained model:
47
+ ```python
48
+ from peft import LoraConfig
49
+ config = LoraConfig(init_lora_weights="pissa", ...)
50
+ ```
51
+ Alternatively, execute fast SVD, which takes only a few seconds. The number of iterations determines the trade-off between the error and computation time:
52
+ ```python
53
+ lora_config = LoraConfig(init_lora_weights="pissa_niter_[number of iters]", ...)
54
+ ```
55
+ For detailed instructions on using PiSSA, please follow [this example](https://github.com/fxmeng/peft/tree/main/examples/pissa_finetuning).
56
+
57
+ ### OLoRA
58
+ [OLoRA](https://arxiv.org/abs/2406.01775) utilizes QR decomposition to initialize the LoRA adapters. OLoRA translates the base weights of the model by a factor of their QR decompositions, i.e., it mutates the weights before performing any training on them. This approach significantly improves stability, accelerates convergence speed, and ultimately achieves superior performance.
59
+
60
+ You just need to pass a single additional option to use OLoRA:
61
+ ```python
62
+ from peft import LoraConfig
63
+ config = LoraConfig(init_lora_weights="olora", ...)
64
+ ```
65
+ For more advanced usage, please refer to our [documentation](https://github.com/huggingface/peft/tree/main/examples/olora_finetuning).
66
+ ### LoftQ
67
+
68
+ #### Standard approach
69
+
70
+ When quantizing the base model for QLoRA training, consider using the [LoftQ initialization](https://arxiv.org/abs/2310.08659), which has been shown to improve performance when training quantized models. The idea is that the LoRA weights are initialized such that the quantization error is minimized. To use LoftQ, follow [these instructions](https://github.com/huggingface/peft/tree/main/examples/loftq_finetuning).
71
+
72
+ In general, for LoftQ to work best, it is recommended to target as many layers with LoRA as possible, since those not targeted cannot have LoftQ applied. This means that passing `LoraConfig(..., target_modules="all-linear")` will most likely give the best results. Also, you should use `nf4` as quant type in your quantization config when using 4bit quantization, i.e. `BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")`.
73
+
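+ As a rough sketch, these two recommendations could be combined as follows. The model id is a placeholder, and the LoftQ-specific settings themselves are omitted here; take those from the linked instructions:
+
+ ```python
+ from transformers import AutoModelForCausalLM, BitsAndBytesConfig
+ from peft import LoraConfig
+
+ # 4-bit quantization with the recommended nf4 quant type
+ bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")
+ base_model = AutoModelForCausalLM.from_pretrained(
+     "mistralai/Mistral-7B-v0.1",  # placeholder model id
+     quantization_config=bnb_config,
+ )
+
+ # target as many linear layers as possible so LoftQ can be applied to them
+ lora_config = LoraConfig(task_type="CAUSAL_LM", target_modules="all-linear")
+ ```
+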
74
+ #### A more convenient way
75
+
76
+ An easier but more limited way to apply LoftQ initialization is to use the convenience function `replace_lora_weights_loftq`. This takes the quantized PEFT model as input and replaces the LoRA weights in-place with their LoftQ-initialized counterparts.
77
+
78
+ ```python
79
+ from peft import replace_lora_weights_loftq
80
+ from transformers import BitsAndBytesConfig
81
+
82
+ bnb_config = BitsAndBytesConfig(load_in_4bit=True, ...)
83
+ base_model = AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config)
84
+ # note: don't pass init_lora_weights="loftq" or loftq_config!
85
+ lora_config = LoraConfig(task_type="CAUSAL_LM")
86
+ peft_model = get_peft_model(base_model, lora_config)
87
+ replace_lora_weights_loftq(peft_model)
88
+ ```
89
+
90
+ `replace_lora_weights_loftq` also allows you to pass a `callback` argument to give you more control over which layers should be modified or not, which empirically can improve the results quite a lot. To see a more elaborate example of this, check out [this notebook](https://github.com/huggingface/peft/blob/main/examples/loftq_finetuning/LoftQ_weight_replacement.ipynb).
91
+
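+ As a sketch of how such a callback might look, assuming it receives the model and the name of the module that was just replaced and returns whether to keep the replacement (`calib_inputs` and `reference_logits` are placeholders you would need to provide):
+
+ ```python
+ import torch
+
+ best_error = float("inf")
+
+ def loftq_callback(model, module_name):
+     # keep the replacement only if it brings the model's logits closer to a reference
+     global best_error
+     with torch.no_grad():
+         logits = model(**calib_inputs).logits
+     error = torch.nn.functional.mse_loss(logits, reference_logits).item()
+     if error < best_error:
+         best_error = error
+         return True  # keep the LoftQ-initialized weights for this module
+     return False  # roll back the replacement for this module
+
+ replace_lora_weights_loftq(peft_model, callback=loftq_callback)
+ ```
+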
92
+ `replace_lora_weights_loftq` implements only one iteration step of LoftQ. This means that only the LoRA weights are updated, instead of iteratively updating LoRA weights and quantized base model weights. This may lead to lower performance but has the advantage that we can use the original quantized weights derived from the base model, instead of having to keep an extra copy of modified quantized weights. Whether this tradeoff is worthwhile depends on the use case.
93
+
94
+ At the moment, `replace_lora_weights_loftq` has these additional limitations:
95
+
96
+ - Model files must be stored as a `safetensors` file.
97
+ - Only bitsandbytes 4bit quantization is supported.
98
+
99
+ <Tip>
100
+
101
+ Learn more about how PEFT works with quantization in the [Quantization](quantization) guide.
102
+
103
+ </Tip>
104
+
105
+ ### Rank-stabilized LoRA
106
+
107
+ Another way to initialize [`LoraConfig`] is with the [rank-stabilized LoRA (rsLoRA)](https://huggingface.co/papers/2312.03732) method. The LoRA architecture scales each adapter during every forward pass by a fixed scalar which is set at initialization and depends on the rank `r`. The scalar is given by `lora_alpha/r` in the original implementation, but rsLoRA uses `lora_alpha/math.sqrt(r)` which stabilizes the adapters and increases the performance potential from using a higher `r`.
108
+
109
+ ```py
110
+ from peft import LoraConfig
111
+
112
+ config = LoraConfig(use_rslora=True, ...)
113
+ ```
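+
+ For example, with `lora_alpha=16` and `r=256`, standard LoRA scales the adapter output by 16/256 = 0.0625, while rsLoRA scales it by 16/sqrt(256) = 1.0, so the adapter's contribution is not scaled down as aggressively at high ranks.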
114
+
115
+ ### Weight-Decomposed Low-Rank Adaptation (DoRA)
116
+
117
+ This technique decomposes the updates of the weights into two parts, magnitude and direction. Direction is handled by normal LoRA, whereas the magnitude is handled by a separate learnable parameter. This can improve the performance of LoRA, especially at low ranks. For more information on DoRA, see https://arxiv.org/abs/2402.09353.
118
+
119
+ ```py
120
+ from peft import LoraConfig
121
+
122
+ config = LoraConfig(use_dora=True, ...)
123
+ ```
124
+
125
+ If parts of the model or the DoRA adapter are offloaded to CPU, you can get a significant speedup at the cost of some temporary (ephemeral) VRAM overhead by using `ephemeral_gpu_offload=True` in `config.runtime_config`.
126
+
127
+ ```py
128
+ from peft import LoraConfig, LoraRuntimeConfig
129
+
130
+ config = LoraConfig(use_dora=True, runtime_config=LoraRuntimeConfig(ephemeral_gpu_offload=True), ...)
131
+ ```
132
+
133
+ A `PeftModel` with a DoRA adapter can also be loaded with the `ephemeral_gpu_offload=True` flag using the `from_pretrained` method as well as the `load_adapter` method.
134
+
135
+ ```py
136
+ from peft import PeftModel
137
+
138
+ model = PeftModel.from_pretrained(base_model, peft_model_id, ephemeral_gpu_offload=True)
139
+ ```
140
+
141
+ #### Caveats
142
+
143
+ - DoRA only supports linear and Conv2d layers at the moment.
144
+ - DoRA introduces a bigger overhead than pure LoRA, so it is recommended to merge weights for inference, see [`LoraModel.merge_and_unload`].
145
+ - DoRA should work with weights quantized with bitsandbytes ("QDoRA"). However, issues have been reported when using QDoRA with DeepSpeed Zero2.
146
+
147
+ ### QLoRA-style training
148
+
149
+ The default LoRA settings in PEFT add trainable weights to the query and value layers of each attention block. But [QLoRA](https://hf.co/papers/2305.14314), which adds trainable weights to all the linear layers of a transformer model, can provide performance equal to a fully finetuned model. To apply LoRA to all the linear layers, like in QLoRA, set `target_modules="all-linear"` (easier than specifying individual modules by name which can vary depending on the architecture).
150
+
151
+ ```py
152
+ config = LoraConfig(target_modules="all-linear", ...)
153
+ ```
154
+
155
+ ### Memory efficient Layer Replication with LoRA
156
+
157
+ One approach to improving a model's performance is to expand it by duplicating some of its layers, building a larger model from a pretrained model of a given size, for example growing a 7B model into a 10B model as described in the [SOLAR](https://arxiv.org/abs/2312.15166) paper. PEFT LoRA supports this kind of expansion in a memory-efficient manner and allows further fine-tuning with LoRA adapters attached to the layers after replication. The replicated layers do not take additional memory because they share the underlying weights, so the only extra memory required is for the adapter weights. To use this feature, create a config with the `layer_replication` argument.
158
+
159
+ ```py
160
+ config = LoraConfig(layer_replication=[[0,4], [2,5]], ...)
161
+ ```
162
+
163
+ Assuming the original model had 5 layers `[0, 1, 2, 3, 4]`, this would create a model with 7 layers arranged as `[0, 1, 2, 3, 2, 3, 4]`. This follows the [mergekit](https://github.com/arcee-ai/mergekit) pass-through merge convention, where sequences of layers specified as start-inclusive, end-exclusive tuples are stacked to build the final model. Each layer in the final model gets its own distinct set of LoRA adapters.
164
+
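+ The expansion is easy to reason about: each `[start, end)` range is expanded and the resulting layer indices are concatenated. A quick sanity check in plain Python (this is just an illustration of the convention, not a PEFT API):
+
+ ```py
+ layer_replication = [[0, 4], [2, 5]]
+ final_layers = [i for start, end in layer_replication for i in range(start, end)]
+ print(final_layers)  # [0, 1, 2, 3, 2, 3, 4]
+ ```
+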
165
+ [Fewshot-Metamath-OrcaVicuna-Mistral-10B](https://huggingface.co/abacusai/Fewshot-Metamath-OrcaVicuna-Mistral-10B) is an example of a model trained using this method on Mistral-7B expanded to 10B. The
166
+ [adapter_config.json](https://huggingface.co/abacusai/Fewshot-Metamath-OrcaVicuna-Mistral-10B/blob/main/adapter_config.json) shows a sample LoRA adapter config applying this method for fine-tuning.
167
+
168
+ ## Optimizers
169
+
170
+ LoRA training can optionally include special purpose optimizers. Currently the only such optimizer is LoRA+.
171
+
172
+ ### LoRA+ optimized LoRA
173
+
174
+ LoRA training can be optimized using [LoRA+](https://arxiv.org/abs/2402.12354), which uses different learning rates for the adapter matrices A and B, shown to increase finetuning speed by up to 2x and performance by 1-2%.
175
+
176
+ ```py
177
+ from peft import LoraConfig, get_peft_model
178
+ from peft.optimizers import create_loraplus_optimizer
179
+ from transformers import Trainer
180
+ import bitsandbytes as bnb
181
+
182
+ base_model = ...
183
+ config = LoraConfig(...)
184
+ model = get_peft_model(base_model, config)
185
+
186
+ optimizer = create_loraplus_optimizer(
187
+ model=model,
188
+ optimizer_cls=bnb.optim.Adam8bit,
189
+ lr=5e-5,
190
+ loraplus_lr_ratio=16,
191
+ )
192
+ scheduler = None
193
+
194
+ ...
195
+ trainer = Trainer(
196
+ ...,
197
+ optimizers=(optimizer, scheduler),
198
+ )
199
+ ```
200
+
201
+ ## Merge LoRA weights into the base model
202
+
203
+ While LoRA is significantly smaller and faster to train, you may encounter latency issues during inference due to separately loading the base model and the LoRA adapter. To eliminate latency, use the [`~LoraModel.merge_and_unload`] function to merge the adapter weights with the base model. This allows you to use the newly merged model as a standalone model. The [`~LoraModel.merge_and_unload`] function doesn't keep the adapter weights in memory.
204
+
205
+ Below is a diagram that explains the intuition of LoRA adapter merging:
206
+
207
+ <div class="flex justify-center">
208
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora_diagram.png"/>
209
+ </div>
210
+
211
+ The snippets below show how to do this with PEFT.
212
+
213
+ ```py
214
+ from transformers import AutoModelForCausalLM
215
+ from peft import PeftModel
216
+
217
+ base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
218
+ peft_model_id = "alignment-handbook/zephyr-7b-sft-lora"
219
+ model = PeftModel.from_pretrained(base_model, peft_model_id)
220
+ model.merge_and_unload()
221
+ ```
222
+
223
+ If you need to keep a copy of the weights so you can unmerge the adapter later or delete and load different ones, you should use the [`~LoraModel.merge_adapter`] function instead. Now you have the option to use [`~LoraModel.unmerge_adapter`] to return the base model.
224
+
225
+ ```py
226
+ from transformers import AutoModelForCausalLM
227
+ from peft import PeftModel
228
+
229
+ base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
230
+ peft_model_id = "alignment-handbook/zephyr-7b-sft-lora"
231
+ model = PeftModel.from_pretrained(base_model, peft_model_id)
232
+ model.merge_adapter()
233
+
234
+ # unmerge the LoRA layers from the base model
235
+ model.unmerge_adapter()
236
+ ```
237
+
238
+ The [`~LoraModel.add_weighted_adapter`] function is useful for merging multiple LoRAs into a new adapter based on a user provided weighting scheme in the `weights` parameter. Below is an end-to-end example.
239
+
240
+ First load the base model:
241
+
242
+ ```python
243
+ from transformers import AutoModelForCausalLM
244
+ from peft import PeftModel
245
+ import torch
246
+
247
+ base_model = AutoModelForCausalLM.from_pretrained(
248
+ "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16, device_map="auto"
249
+ )
250
+ ```
251
+
252
+ Then we load the first adapter:
253
+
254
+ ```python
255
+ peft_model_id = "alignment-handbook/zephyr-7b-sft-lora"
256
+ model = PeftModel.from_pretrained(base_model, peft_model_id, adapter_name="sft")
257
+ ```
258
+
259
+ Then load a different adapter and merge it with the first one:
260
+
261
+ ```python
262
+ weighted_adapter_name = "sft-dpo"
263
+ model.load_adapter("alignment-handbook/zephyr-7b-dpo-lora", adapter_name="dpo")
264
+ model.add_weighted_adapter(
265
+ adapters=["sft", "dpo"],
266
+ weights=[0.7, 0.3],
267
+ adapter_name=weighted_adapter_name,
268
+ combination_type="linear"
269
+ )
270
+ model.set_adapter(weighted_adapter_name)
271
+ ```
272
+
273
+ <Tip>
274
+
275
+ There are several supported methods for `combination_type`. Refer to the [documentation](../package_reference/lora#peft.LoraModel.add_weighted_adapter) for more details. Note that "svd" as the `combination_type` is not supported when using `torch.float16` or `torch.bfloat16` as the datatype.
276
+
277
+ </Tip>
278
+
279
+ Now, perform inference:
280
+
281
+ ```python
282
+ tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
283
+
284
+ prompt = "Hey, are you conscious? Can you talk to me?"
285
+ inputs = tokenizer(prompt, return_tensors="pt")
286
+ inputs = {k: v.to("cuda") for k, v in inputs.items()}
287
+
288
+ with torch.no_grad():
289
+ generate_ids = model.generate(**inputs, max_length=30)
290
+ outputs = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
291
+ print(outputs)
292
+ ```
293
+
294
+ ## Load adapters
295
+
296
+ Adapters can be loaded onto a pretrained model with [`~PeftModel.load_adapter`], which is useful for trying out different adapters whose weights aren't merged. Set the active adapter weights with the [`~LoraModel.set_adapter`] function.
297
+
298
+ ```py
299
+ from transformers import AutoModelForCausalLM
300
+ from peft import PeftModel
301
+
302
+ base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
303
+ peft_model_id = "alignment-handbook/zephyr-7b-sft-lora"
304
+ model = PeftModel.from_pretrained(base_model, peft_model_id)
305
+
306
+ # load different adapter
307
+ model.load_adapter("alignment-handbook/zephyr-7b-dpo-lora", adapter_name="dpo")
308
+
309
+ # set adapter as active
310
+ model.set_adapter("dpo")
311
+ ```
312
+
313
+ To return the base model, you could use [`~LoraModel.unload`] to unload all of the LoRA modules or [`~LoraModel.delete_adapter`] to delete the adapter entirely.
314
+
315
+ ```py
316
+ # unload adapter
317
+ model.unload()
318
+
319
+ # delete adapter
320
+ model.delete_adapter("dpo")
321
+ ```
322
+
323
+ ## Inference with different LoRA adapters in the same batch
324
+
325
+ Normally, each inference batch has to use the same adapter(s) in PEFT. This can sometimes be annoying, because we may have batches that contain samples intended to be used with different LoRA adapters. For example, we could have a base model that works well in English and two more LoRA adapters, one for French and one for German. Usually, we would have to split our batches such that each batch only contains samples of one of the languages; we cannot combine different languages in the same batch.
326
+
327
+ Thankfully, it is possible to mix different LoRA adapters in the same batch using the `adapter_name` argument. Below, we show an example of how this works in practice. First, let's load the base model, English, and the two adapters, French and German, like this:
328
+
329
+ ```python
330
+ from transformers import AutoTokenizer, AutoModelForCausalLM
331
+ from peft import PeftModel
332
+
333
+ model_id = ...
334
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
335
+
336
+ model = AutoModelForCausalLM.from_pretrained(model_id)
337
+ # load the LoRA adapter for French
338
+ peft_model = PeftModel.from_pretrained(model, <path>, adapter_name="adapter_fr")
339
+ # next, load the LoRA adapter for German
340
+ peft_model.load_adapter(<path>, adapter_name="adapter_de")
341
+ ```
342
+
343
+ Now, we want to generate text on a sample that contains all three languages: The first three samples are in English, the next three are in French, and the last three are in German. We can use the `adapter_names` argument to specify which adapter to use for each sample. Since our base model is used for English, we use the special string `"__base__"` for these samples. For the next three samples, we indicate the adapter name of the French LoRA fine-tune, in this case `"adapter_fr"`. For the last three samples, we indicate the adapter name of the German LoRA fine-tune, in this case `"adapter_de"`. This way, we can use the base model and the two adapters in a single batch.
344
+
345
+ ```python
346
+ inputs = tokenizer(
347
+ [
348
+ "Hello, my dog is cute",
349
+ "Hello, my cat is awesome",
350
+ "Hello, my fish is great",
351
+ "Salut, mon chien est mignon",
352
+ "Salut, mon chat est génial",
353
+ "Salut, mon poisson est super",
354
+ "Hallo, mein Hund ist süß",
355
+ "Hallo, meine Katze ist toll",
356
+ "Hallo, mein Fisch ist großartig",
357
+ ],
358
+ return_tensors="pt",
359
+ padding=True,
360
+ )
361
+
362
+ adapter_names = [
363
+ "__base__", "__base__", "__base__",
364
+ "adapter_fr", "adapter_fr", "adapter_fr",
365
+ "adapter_de", "adapter_de", "adapter_de",
366
+ ]
367
+ output = peft_model.generate(**inputs, adapter_names=adapter_names, max_new_tokens=20)
368
+ ```
369
+
370
+ Note that the order does not matter here, i.e. the samples in the batch don't need to be grouped by adapter as in the example above. We just need to ensure that the `adapter_names` argument is aligned correctly with the samples.
371
+
372
+ ### Caveats
373
+
374
+ Using this feature has some drawbacks, namely:
375
+
376
+ - It only works for inference, not for training.
377
+ - Disabling adapters using the `with model.disable_adapter()` context takes precedence over `adapter_names`.
378
+ - You cannot pass `adapter_names` when some adapter weights were merged with the base weights using the `merge_adapter` method. Please unmerge all adapters first by calling `model.unmerge_adapter()`.
379
+ - For obvious reasons, this cannot be used after calling `merge_and_unload()`, since all the LoRA adapters will be merged into the base weights in this case.
380
+ - This feature does not currently work with DoRA, so set `use_dora=False` in your `LoraConfig` if you want to use it.
381
+ - There is an expected overhead for inference with `adapter_names`, especially if the number of different adapters in the batch is high. This is because the batch size is effectively reduced to the number of samples per adapter. If runtime performance is your top priority, try the following:
382
+ - Increase the batch size.
383
+   - Try to avoid having a large number of different adapters in the same batch; prefer homogeneous batches. This can be achieved by buffering samples with the same adapter and only performing inference with a small handful of different adapters (see the sketch after this list).
384
+ - Take a look at alternative implementations such as [LoRAX](https://github.com/predibase/lorax), [punica](https://github.com/punica-ai/punica), or [S-LoRA](https://github.com/S-LoRA/S-LoRA), which are specialized to work with a large number of different adapters.
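+
+ As a rough illustration of the buffering idea mentioned above, samples can be grouped by adapter before calling `generate`, so that each call only sees a homogeneous batch. The helper function below is made up for this example and is not part of PEFT:
+
+ ```python
+ from collections import defaultdict
+
+ def generate_grouped_by_adapter(peft_model, tokenizer, prompts, adapter_names, **gen_kwargs):
+     # bucket the prompts by the adapter they should use
+     buckets = defaultdict(list)
+     for prompt, adapter in zip(prompts, adapter_names):
+         buckets[adapter].append(prompt)
+
+     outputs = {}
+     for adapter, grouped_prompts in buckets.items():
+         inputs = tokenizer(grouped_prompts, return_tensors="pt", padding=True)
+         # each generate call now uses a single adapter for the whole batch
+         generated = peft_model.generate(
+             **inputs, adapter_names=[adapter] * len(grouped_prompts), **gen_kwargs
+         )
+         outputs[adapter] = tokenizer.batch_decode(generated, skip_special_tokens=True)
+     return outputs
+ ```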