PEFT documentation

Tuners

A tuner (or adapter) is a module that can be plugged into a torch.nn.Module. BaseTuner is the base class for other tuners and provides shared methods and attributes for preparing an adapter configuration and for replacing a target module with the adapter module. BaseTunerLayer is the base class for adapter layers. It offers methods and attributes for managing adapters, such as activating and disabling them.

BaseTuner

class peft.tuners.tuners_utils.BaseTuner

( model, peft_config: Union[PeftConfig, dict[str, PeftConfig]], adapter_name: str, low_cpu_mem_usage: bool = False )

Parameters

  • model (torch.nn.Module) — The model to which the adapter tuner layers will be attached.
  • forward (Callable) — The forward method of the model.
  • peft_config (Union[PeftConfig, dict[str, PeftConfig]]) — The adapter configuration object; it should be a dictionary mapping str to PeftConfig objects. One can also pass a single PeftConfig object, in which case a new adapter is created with the default name adapter; alternatively, pass a dictionary with adapter_name as the key and that peft config as the value.
  • config (dict[str, Any]) — The model configuration object, it should be a dictionary of str to Any objects.
  • targeted_module_names (list[str]) — The list of module names that were actually adapted. Useful to inspect if you want to quickly double-check that config.target_modules was specified correctly (see the example below).

A base tuner model that provides the common methods and attributes for all tuners that are injectable into a torch.nn.Module.

To add a new Tuner class, one needs to override the following methods:

  • _prepare_adapter_config: A private method to prepare the adapter config when needed, for example in case the field target_modules is missing.
  • _create_and_replace: A private method to create and replace the target module with the adapter module.
  • _check_target_module_exists: A private helper method to check if the passed module’s key name matches any of the target modules in the adapter_config.

The easiest way is to check what is done in the peft.tuners.lora.LoraModel class.
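
As a quick illustration, the sketch below wraps a model with get_peft_model (which builds a LoraModel, a BaseTuner subclass) and then inspects targeted_module_names; the checkpoint name and target modules are only examples:

>>> from transformers import AutoModelForCausalLM
>>> from peft import LoraConfig, get_peft_model

>>> base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
>>> peft_model = get_peft_model(base_model, LoraConfig(target_modules=["q_proj", "v_proj"]))

>>> # `peft_model.base_model` is the tuner instance; `targeted_module_names` lists every
>>> # module that was actually replaced, which helps verify the `target_modules` setting.
>>> print(peft_model.base_model.targeted_module_names[:4])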

disable_adapter_layers

( )

Disable all adapters in-place.

enable_adapter_layers

( )

Enable all adapters in-place.
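
A minimal sketch of toggling the adapters, assuming a LoRA-wrapped model (the checkpoint and target modules are only examples); both methods are reachable on the tuner via base_model:

>>> from transformers import AutoModelForCausalLM
>>> from peft import LoraConfig, get_peft_model

>>> peft_model = get_peft_model(
...     AutoModelForCausalLM.from_pretrained("facebook/opt-125m"),
...     LoraConfig(target_modules=["q_proj", "v_proj"]),
... )

>>> # Run the model with the adapters switched off, then switch them back on.
>>> peft_model.base_model.disable_adapter_layers()
>>> # ... forward passes here use only the base weights ...
>>> peft_model.base_model.enable_adapter_layers()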

get_model_config

( model: nn.Module )

Parameters

  • model (nn.Module) — Model to get the config from.
  • default (dict|None, optional) — What to return if the model does not have a config attribute.

This method gets the config from a model in dictionary form. If the model has no config attribute, this method returns the default config instead.

inject_adapter

( model: nn.Module, adapter_name: str, autocast_adapter_dtype: bool = True, low_cpu_mem_usage: bool = False )

Parameters

  • model (nn.Module) — The model to be tuned.
  • adapter_name (str) — The adapter name.
  • autocast_adapter_dtype (bool, optional) — Whether to autocast the adapter dtype. Defaults to True.
  • low_cpu_mem_usage (bool, optional, defaults to False) — Create empty adapter weights on meta device. Useful to speed up the loading process.

Creates adapter layers and replaces the target modules with the adapter layers. This method is called under the hood by peft.mapping.get_peft_model if a non-prompt tuning adapter class is passed.

The corresponding PEFT config is directly retrieved from the peft_config attribute of the BaseTuner class.
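
inject_adapter is rarely called directly; besides get_peft_model, the convenience function peft.inject_adapter_in_model goes through the same mechanism. A minimal sketch on a custom module (the module and its attribute names are made up for illustration):

>>> import torch.nn as nn
>>> from peft import LoraConfig, inject_adapter_in_model

>>> class MLP(nn.Module):
...     def __init__(self):
...         super().__init__()
...         self.proj = nn.Linear(16, 16)
...         self.head = nn.Linear(16, 2)
...
...     def forward(self, x):
...         return self.head(self.proj(x))

>>> config = LoraConfig(target_modules=["proj"])
>>> model = inject_adapter_in_model(config, MLP())
>>> print(type(model.proj))  # the targeted module is now wrapped by a LoRA adapter layer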

merge_adapter

( adapter_names: Optional[list[str]] = None )

Parameters

  • safe_merge (bool, optional) — If True, the merge operation will be performed in a copy of the original weights and check for NaNs before merging the weights. This is useful if you want to check if the merge operation will produce NaNs. Defaults to False.
  • adapter_names (list[str], optional) — The list of adapter names that should be merged. If None, all active adapters will be merged. Defaults to None.

This method merges the adapter layers into the base model.

Merging adapters can lead to a speed up of the forward pass. A copy of the adapter weights is still kept in memory, which is required to unmerge the adapters. In order to merge the adapter weights without keeping them in memory, please call merge_and_unload.
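
A minimal sketch of merging and unmerging, assuming a LoRA-wrapped model (the checkpoint and target modules are only examples):

>>> from transformers import AutoModelForCausalLM
>>> from peft import LoraConfig, get_peft_model

>>> peft_model = get_peft_model(
...     AutoModelForCausalLM.from_pretrained("facebook/opt-125m"),
...     LoraConfig(target_modules=["q_proj", "v_proj"]),
... )

>>> # Fold the adapter weights into the base weights for a faster forward pass ...
>>> peft_model.base_model.merge_adapter()
>>> # ... and restore the original, unmerged state afterwards.
>>> peft_model.base_model.unmerge_adapter()

>>> # To merge permanently and drop the adapter copies from memory:
>>> merged_model = peft_model.merge_and_unload()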

unmerge_adapter

( )

This method unmerges all merged adapter layers from the base model.

BaseTunerLayer

class peft.tuners.tuners_utils.BaseTunerLayer

( )

Parameters

  • is_pluggable (bool, optional) — Whether the adapter layer can be plugged into any PyTorch module.
  • active_adapters (Union[List[str], str], optional) — The name(s) of the active adapter(s).

A tuner layer mixin that provides the common methods and attributes for all tuners.

delete_adapter

( adapter_name: str )

Parameters

  • adapter_name (str) — The name of the adapter to delete

Delete an adapter from the layer.

This should be called on all adapter layers, or else we will get an inconsistent state.

This method will also set a new active adapter if the deleted adapter was an active adapter. It is important that the new adapter is chosen in a deterministic way, so that the same adapter is chosen on all layers.
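
A hedged sketch of what calling delete_adapter on all adapter layers looks like; in practice, prefer the model-level delete_adapter (e.g. on LoraModel or PeftModel), which takes care of this consistently. The checkpoint, target modules, and the adapter name "default" (the name used by get_peft_model) are assumptions of the example:

>>> from transformers import AutoModelForCausalLM
>>> from peft import LoraConfig, get_peft_model
>>> from peft.tuners.tuners_utils import BaseTunerLayer

>>> peft_model = get_peft_model(
...     AutoModelForCausalLM.from_pretrained("facebook/opt-125m"),
...     LoraConfig(target_modules=["q_proj", "v_proj"]),
... )

>>> # Remove the adapter "default" from every adapter layer so the model stays consistent.
>>> for module in peft_model.modules():
...     if isinstance(module, BaseTunerLayer):
...         module.delete_adapter("default")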

enable_adapters

( enabled: bool )

Parameters

  • enabled (bool) — True to enable adapters, False to disable adapters

Toggle the enabling and disabling of adapters.

Takes care of setting the requires_grad flag for the adapter weights.

get_base_layer

( )

(Recursively) get the base_layer.

This is necessary for the case that the tuner layer wraps another tuner layer.
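
A small sketch of unwrapping adapter layers back to their original modules, assuming a LoRA-wrapped model (checkpoint and target modules are only examples):

>>> from transformers import AutoModelForCausalLM
>>> from peft import LoraConfig, get_peft_model
>>> from peft.tuners.tuners_utils import BaseTunerLayer

>>> peft_model = get_peft_model(
...     AutoModelForCausalLM.from_pretrained("facebook/opt-125m"),
...     LoraConfig(target_modules=["q_proj", "v_proj"]),
... )

>>> for name, module in peft_model.named_modules():
...     if isinstance(module, BaseTunerLayer):
...         base = module.get_base_layer()  # the wrapped module, e.g. the original nn.Linear
...         print(name, "->", type(base).__name__)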

set_adapter

( adapter_names: str | list[str] )

Parameters

  • adapter_names (str or List[str]) — Name(s) of the adapter(s) to be activated.

Set the active adapter(s).

Additionally, this function will set the specified adapters to trainable (i.e., requires_grad=True). If this is not desired, use the following code.

>>> for name, param in model_peft.named_parameters():
...     if 'lora' in name:  # adjust this check to match your adapter's parameter names
...         param.requires_grad = False
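
At the layer level, switching adapters can be sketched as below; normally you would call set_adapter once on the PEFT model (or the tuner), which propagates the change to every layer. The snippet assumes model_peft from above also has a second adapter named "other":

>>> from peft.tuners.tuners_utils import BaseTunerLayer

>>> # Activate the adapter "other" on every adapter layer.
>>> for module in model_peft.modules():
...     if isinstance(module, BaseTunerLayer):
...         module.set_adapter("other")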