LoRA
LoRA is a low-rank decomposition method that reduces the number of trainable parameters, which speeds up finetuning of large models and uses less memory. In PEFT, using LoRA is as easy as setting up a LoraConfig and wrapping it with get_peft_model() to create a trainable PeftModel.
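For reference, a minimal setup might look like the following sketch (the model name and hyperparameters here are illustrative assumptions, not recommendations):
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# load a base model (the model name is just an example)
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
# attach LoRA adapters to the attention projections
config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
peft_model = get_peft_model(base_model, config)
peft_model.print_trainable_parameters()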
This guide explores in more detail other options and features for using LoRA.
Initialization
The initialization of LoRA weights is controlled by the parameter init_lora_weights in LoraConfig. By default, PEFT initializes LoRA weights with Kaiming-uniform for weight A and zeros for weight B, resulting in an identity transform (same as the reference implementation).
It is also possible to pass init_lora_weights="gaussian". As the name suggests, this initializes weight A with a Gaussian distribution and zeros for weight B (this is how Diffusers initializes LoRA weights).
from peft import LoraConfig
config = LoraConfig(init_lora_weights="gaussian", ...)
There is also an option to set init_lora_weights=False, which is useful for debugging and testing; this should be the only time you use this option. When it is chosen, the LoRA weights are initialized such that they do not result in an identity transform.
from peft import LoraConfig
config = LoraConfig(init_lora_weights=False, ...)
PiSSA
PiSSA initializes the LoRA adapter using the principal singular values and singular vectors. This straightforward modification allows PiSSA to converge more rapidly than LoRA and ultimately attain superior performance. Moreover, PiSSA reduces the quantization error compared to QLoRA, leading to further enhancements.
Configure the initialization method to “pissa”, which may take several minutes to execute SVD on the pre-trained model:
from peft import LoraConfig
config = LoraConfig(init_lora_weights="pissa", ...)
Alternatively, execute fast SVD, which takes only a few seconds. The number of iterations determines the trade-off between the error and computation time:
lora_config = LoraConfig(init_lora_weights="pissa_niter_[number of iters]", ...)
For a detailed guide on using PiSSA, please follow these instructions.
LoftQ
Standard approach
When quantizing the base model for QLoRA training, consider using the LoftQ initialization, which has been shown to improve performance when training quantized models. The idea is that the LoRA weights are initialized such that the quantization error is minimized. To use LoftQ, follow these instructions.
In general, for LoftQ to work best, it is recommended to target as many layers with LoRA as possible, since those not targeted cannot have LoftQ applied. This means that passing LoraConfig(..., target_modules="all-linear") will most likely give the best results. Also, you should use nf4 as the quant type in your quantization config when using 4bit quantization, i.e. BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4").
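As a rough sketch of this approach (the model name and settings are assumptions; see the linked instructions for the authoritative steps), the LoftQ options are passed via LoftQConfig together with init_lora_weights="loftq":
from transformers import AutoModelForCausalLM
from peft import LoftQConfig, LoraConfig, get_peft_model

# don't quantize the base model here; LoftQ performs the quantization itself
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
loftq_config = LoftQConfig(loftq_bits=4)
lora_config = LoraConfig(init_lora_weights="loftq", loftq_config=loftq_config, target_modules="all-linear", task_type="CAUSAL_LM")
peft_model = get_peft_model(base_model, lora_config)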
A more convenient way
An easier but more limited way to apply LoftQ initialization is to use the convenience function replace_lora_weights_loftq. This takes the quantized PEFT model as input and replaces the LoRA weights in-place with their LoftQ-initialized counterparts.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, replace_lora_weights_loftq

bnb_config = BitsAndBytesConfig(load_in_4bit=True, ...)
base_model = AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config)
# note: don't pass init_lora_weights="loftq" or loftq_config!
lora_config = LoraConfig(task_type="CAUSAL_LM")
peft_model = get_peft_model(base_model, lora_config)
replace_lora_weights_loftq(peft_model)
replace_lora_weights_loftq also allows you to pass a callback argument to give you more control over which layers should be modified or not, which empirically can improve the results quite a lot. To see a more elaborate example of this, check out this notebook.
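For illustration, a callback might look like the following sketch (the layer-selection criterion is hypothetical); the callback receives the model and the name of the module that was just modified and returns whether to keep the replacement for that module:
def my_callback(model, module_name):
    # hypothetical criterion: only keep the LoftQ replacement for selected layers;
    # returning False rolls that module back to its previous LoRA weights
    return "layers.10." in module_name or "layers.11." in module_name

replace_lora_weights_loftq(peft_model, callback=my_callback)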
replace_lora_weights_loftq implements only one iteration step of LoftQ. This means that only the LoRA weights are updated, instead of iteratively updating LoRA weights and quantized base model weights. This may lead to lower performance but has the advantage that we can use the original quantized weights derived from the base model, instead of having to keep an extra copy of modified quantized weights. Whether this tradeoff is worthwhile depends on the use case.
At the moment, replace_lora_weights_loftq has these additional limitations:
- Model files must be stored as a safetensors file.
- Only bitsandbytes 4bit quantization is supported.
Learn more about how PEFT works with quantization in the Quantization guide.
Rank-stabilized LoRA
Another way to initialize LoraConfig is with the rank-stabilized LoRA (rsLoRA) method. The LoRA architecture scales each adapter during every forward pass by a fixed scalar which is set at initialization and depends on the rank r. The scalar is given by lora_alpha/r in the original implementation, but rsLoRA uses lora_alpha/math.sqrt(r), which stabilizes the adapters and increases the performance potential from using a higher r.
from peft import LoraConfig
config = LoraConfig(use_rslora=True, ...)
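To illustrate the difference between the two scaling factors (the values of lora_alpha and r below are just examples):
import math

lora_alpha, r = 16, 256
print(lora_alpha / r)             # 0.0625 -> default LoRA scaling shrinks quickly as r grows
print(lora_alpha / math.sqrt(r))  # 1.0    -> rsLoRA scaling decays much more slowly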
Weight-Decomposed Low-Rank Adaptation (DoRA)
This technique decomposes the updates of the weights into two parts, magnitude and direction. Direction is handled by normal LoRA, whereas the magnitude is handled by a separate learnable parameter. This can improve the performance of LoRA, especially at low ranks. For more information on DoRA, see https://arxiv.org/abs/2402.09353.
from peft import LoraConfig
config = LoraConfig(use_dora=True, ...)
Caveats
- DoRA only supports linear and Conv2d layers at the moment.
- DoRA introduces a bigger overhead than pure LoRA, so it is recommended to merge weights for inference, see LoraModel.merge_and_unload().
- DoRA should work with weights quantized with bitsandbytes (“QDoRA”). However, issues have been reported when using QDoRA with DeepSpeed Zero2.
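For reference, a QDoRA setup might look like the following sketch (the model name and quantization settings are assumptions):
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4bit-quantized base model combined with DoRA adapters ("QDoRA")
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", quantization_config=bnb_config)
config = LoraConfig(use_dora=True, target_modules="all-linear", task_type="CAUSAL_LM")
peft_model = get_peft_model(base_model, config)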
QLoRA-style training
The default LoRA settings in PEFT add trainable weights to the query and value layers of each attention block. But QLoRA, which adds trainable weights to all the linear layers of a transformer model, can provide performance equal to a fully finetuned model. To apply LoRA to all the linear layers, like in QLoRA, set target_modules="all-linear" (easier than specifying individual modules by name, which can vary depending on the architecture).
config = LoraConfig(target_modules="all-linear", ...)
Memory efficient Layer Replication with LoRA
One approach to improving the performance of a model is to expand it by duplicating some of its layers, building a larger model from a pretrained model of a given size, for example growing a 7B model into a 10B model as described in the SOLAR paper. PEFT LoRA supports this kind of expansion in a memory-efficient manner: LoRA adapters are attached to the layers after replication, so further fine-tuning is possible. The replicated layers do not take additional memory because they share the underlying weights, so the only additional memory required is for the adapter weights. To use this feature, create a config with the layer_replication argument.
config = LoraConfig(layer_replication=[[0,4], [2,5]], ...)
Assuming the original model had 5 layers [0, 1, 2, 3, 4], this would create a model with 7 layers arranged as [0, 1, 2, 3, 2, 3, 4]. This follows the mergekit pass-through merge convention, where sequences of layers specified as start-inclusive and end-exclusive tuples are stacked to build the final model. Each layer in the final model gets its own distinct set of LoRA adapters.
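The way the ranges expand can be pictured with a small sketch (plain Python, not PEFT code), assuming start-inclusive, end-exclusive tuples:
layer_replication = [[0, 4], [2, 5]]
final_layers = [i for start, end in layer_replication for i in range(start, end)]
print(final_layers)  # [0, 1, 2, 3, 2, 3, 4]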
Fewshot-Metamath-OrcaVicuna-Mistral-10B is an example of a model trained using this method on Mistral-7B expanded to 10B. The adapter_config.json shows a sample LoRA adapter config applying this method for fine-tuning.
Merge adapters
While LoRA is significantly smaller and faster to train, you may encounter latency issues during inference due to separately loading the base model and the LoRA adapter. To eliminate latency, use the merge_and_unload() function to merge the adapter weights with the base model. This allows you to use the newly merged model as a standalone model. The merge_and_unload() function doesn’t keep the adapter weights in memory.
from transformers import AutoModelForCausalLM
from peft import PeftModel
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
peft_model_id = "alignment-handbook/zephyr-7b-sft-lora"
model = PeftModel.from_pretrained(base_model, peft_model_id)
model.merge_and_unload()
If you need to keep a copy of the weights so you can unmerge the adapter later or delete and load different ones, you should use the merge_adapter() function instead. Now you have the option to use unmerge_adapter() to return the base model.
from transformers import AutoModelForCausalLM
from peft import PeftModel
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
peft_model_id = "alignment-handbook/zephyr-7b-sft-lora"
model = PeftModel.from_pretrained(base_model, peft_model_id)
model.merge_adapter()
# unmerge the LoRA layers from the base model
model.unmerge_adapter()
The add_weighted_adapter() function is useful for merging multiple LoRAs into a new adapter based on a user-provided weighting scheme in the weights parameter. Below is an end-to-end example.
First load the base model:
from transformers import AutoModelForCausalLM
from peft import PeftModel
import torch
base_model = AutoModelForCausalLM.from_pretrained(
"mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16, device_map="auto"
)
Then we load the first adapter:
peft_model_id = "alignment-handbook/zephyr-7b-sft-lora"
model = PeftModel.from_pretrained(base_model, peft_model_id, adapter_name="sft")
Then load a different adapter and merge it with the first one:
weighted_adapter_name = "sft-dpo"
model.load_adapter("alignment-handbook/zephyr-7b-dpo-lora", adapter_name="dpo")
model.add_weighted_adapter(
adapters=["sft", "dpo"],
weights=[0.7, 0.3],
adapter_name=weighted_adapter_name,
combination_type="linear"
)
model.set_adapter(weighted_adapter_name)
There are several supported methods for combination_type. Refer to the documentation for more details. Note that "svd" as the combination_type is not supported when using torch.float16 or torch.bfloat16 as the datatype.
Now, perform inference:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
prompt = "Hey, are you conscious? Can you talk to me?"
inputs = tokenizer(prompt, return_tensors="pt")
inputs = {k: v.to("cuda") for k, v in inputs.items()}
with torch.no_grad():
    generate_ids = model.generate(**inputs, max_length=30)
outputs = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(outputs)
Load adapters
Adapters can be loaded onto a pretrained model with load_adapter(), which is useful for trying out different adapters whose weights aren’t merged. Set the active adapter weights with the set_adapter() function.
from transformers import AutoModelForCausalLM
from peft import PeftModel
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
peft_model_id = "alignment-handbook/zephyr-7b-sft-lora"
model = PeftModel.from_pretrained(base_model, peft_model_id)
# load different adapter
model.load_adapter("alignment-handbook/zephyr-7b-dpo-lora", adapter_name="dpo")
# set adapter as active
model.set_adapter("dpo")
To return the base model, you could use unload() to unload all of the LoRA modules or delete_adapter() to delete the adapter entirely.
# unload adapter
model.unload()
# delete adapter
model.delete_adapter("dpo")
Inference with different LoRA adapters in the same batch
Normally, each inference batch has to use the same adapter(s) in PEFT. This can sometimes be annoying, because we may have batches that contain samples intended to be used with different LoRA adapters. For example, we could have a base model that works well in English and two more LoRA adapters, one for French and one for German. Usually, we would have to split our batches such that each batch only contains samples of one of the languages; we cannot combine different languages in the same batch.
Thankfully, it is possible to mix different LoRA adapters in the same batch using the adapter_names argument. Below, we show an example of how this works in practice. First, let's load the base model, English, and the two adapters, French and German, like this:
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
model_id = ...
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# load the LoRA adapter for French
peft_model = PeftModel.from_pretrained(model, <path>, adapter_name="adapter_fr")
# next, load the LoRA adapter for German
peft_model.load_adapter(<path>, adapter_name="adapter_de")
Now, we want to generate text on a batch that contains samples in all three languages: the first three samples are in English, the next three are in French, and the last three are in German. We can use the adapter_names argument to specify which adapter to use for each sample. Since our base model is used for English, we use the special string "__base__" for these samples. For the next three samples, we indicate the adapter name of the French LoRA fine-tune, in this case "adapter_fr". For the last three samples, we indicate the adapter name of the German LoRA fine-tune, in this case "adapter_de". This way, we can use the base model and the two adapters in a single batch.
inputs = tokenizer(
[
"Hello, my dog is cute",
"Hello, my cat is awesome",
"Hello, my fish is great",
"Salut, mon chien est mignon",
"Salut, mon chat est génial",
"Salut, mon poisson est super",
"Hallo, mein Hund ist süß",
"Hallo, meine Katze ist toll",
"Hallo, mein Fisch ist großartig",
],
return_tensors="pt",
padding=True,
)
adapter_names = [
"__base__", "__base__", "__base__",
"adapter_fr", "adapter_fr", "adapter_fr",
"adapter_de", "adapter_de", "adapter_de",
]
output = peft_model.generate(**inputs, adapter_names=adapter_names, max_new_tokens=20)
Note that the order does not matter here, i.e. the samples in the batch don't need to be grouped by adapter as in the example above. We just need to ensure that the adapter_names argument is aligned correctly with the samples.
Caveats
Using this feature has some drawbacks, namely:
- It only works for inference, not for training.
- Disabling adapters using the with model.disable_adapter() context takes precedence over adapter_names.
- You cannot pass adapter_names when some adapter weights were merged with base weights using the merge_adapter method. Please unmerge all adapters first by calling model.unmerge_adapter().
- For obvious reasons, this cannot be used after calling merge_and_unload(), since all the LoRA adapters will be merged into the base weights in this case.
- This feature does not currently work with DoRA, so set use_dora=False in your LoraConfig if you want to use it.
- There is an expected overhead for inference with adapter_names, especially if the number of different adapters in the batch is high. This is because the batch size is effectively reduced to the number of samples per adapter. If runtime performance is your top priority, try the following:
  - Increase the batch size.
  - Try to avoid having a large number of different adapters in the same batch; prefer homogeneous batches. This can be achieved by buffering samples with the same adapter and only performing inference with a small handful of different adapters, as shown in the sketch after this list.
  - Take a look at alternative implementations such as LoRAX, punica, or S-LoRA, which are specialized to work with a large number of different adapters.
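As a rough sketch of the buffering idea from the caveat above (samples is an assumed list of prompts; adapter_names, tokenizer, and peft_model are as in the earlier example):
from collections import defaultdict

# group the prompts by the adapter they should use
buffers = defaultdict(list)
for sample, adapter in zip(samples, adapter_names):
    buffers[adapter].append(sample)

# run one generate() call per adapter so every batch is homogeneous
for adapter, grouped in buffers.items():
    batch = tokenizer(grouped, return_tensors="pt", padding=True)
    output = peft_model.generate(**batch, adapter_names=[adapter] * len(grouped), max_new_tokens=20)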