Mixed adapter types

Normally, it isn’t possible to mix different adapter types in 🤗 PEFT. You can create a PEFT model with two different LoRA adapters (which can have different config options), but it is not possible to combine a LoRA and LoHa adapter. With PeftMixedModel however, this works as long as the adapter types are compatible. The main purpose of allowing mixed adapter types is to combine trained adapters for inference. While it is possible to train a mixed adapter model, this has not been tested and is not recommended.

To load different adapter types into a PEFT model, use PeftMixedModel instead of PeftModel:

from peft import PeftMixedModel

base_model = ...  # load the base model, e.g. from transformers
# load first adapter, which will be called "default"
peft_model = PeftMixedModel.from_pretrained(base_model, <path_to_adapter1>)
# load second adapter under the name "other"
peft_model.load_adapter(<path_to_adapter2>, adapter_name="other")
# activate both adapters at once
peft_model.set_adapter(["default", "other"])

The set_adapter() method is necessary to activate both adapters, otherwise only the first adapter would be active. You can keep adding more adapters by calling add_adapter() repeatedly.
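For example, here is a minimal sketch of stacking more adapters on the model above; the third adapter path, the names "third" and "extra", and the LoHa config values are placeholders, not part of the example above:

from peft import LoHaConfig

# load a third trained adapter from disk (placeholder path and name)
peft_model.load_adapter(<path_to_adapter3>, adapter_name="third")

# or add a freshly initialized adapter from a config with add_adapter();
# the target modules depend on the base model and are placeholders here
loha_config = LoHaConfig(target_modules=["q_proj", "v_proj"])
peft_model.add_adapter("extra", loha_config)

# re-activate the full set of adapters you want to use at inference time
peft_model.set_adapter(["default", "other", "third", "extra"])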

PeftMixedModel does not support saving and loading mixed adapters. The adapters should already be trained, and because the combined model cannot be saved to disk, the loading code above has to be run each time the model is needed.

Tips

  • Not all adapter types can be combined. See peft.tuners.mixed.COMPATIBLE_TUNER_TYPES for a list of compatible types. An error will be raised if you try to combine incompatible adapter types.
  • It is possible to mix multiple adapters of the same type, which can be useful for combining adapters with very different configs.
  • If you want to combine many different adapters, the most performant way to do it is to consecutively add adapters of the same type. For example, add LoRA1, LoRA2, LoHa1, LoHa2 in this order, instead of LoRA1, LoHa1, LoRA2, LoHa2. While the order can affect the output, there is no inherently best order, so it is best to choose the fastest one; see the sketch after this list.
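
As a minimal sketch of that grouped loading order (the base model, paths, and adapter names below are placeholders):

from peft import PeftMixedModel
from peft.tuners.mixed import COMPATIBLE_TUNER_TYPES

print(COMPATIBLE_TUNER_TYPES)  # adapter types that can be combined

base_model = ...  # load the base model, e.g. from transformers
# add adapters of the same type consecutively: both LoRA adapters first, then both LoHa adapters
peft_model = PeftMixedModel.from_pretrained(base_model, <path_to_lora1>, adapter_name="lora1")
peft_model.load_adapter(<path_to_lora2>, adapter_name="lora2")
peft_model.load_adapter(<path_to_loha1>, adapter_name="loha1")
peft_model.load_adapter(<path_to_loha2>, adapter_name="loha2")
peft_model.set_adapter(["lora1", "lora2", "loha1", "loha2"])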