PEFT documentation

LayerNorm Tuning

LayerNorm Tuning (LN Tuning) is a PEFT method that fine-tunes only the parameters of the LayerNorm layers in a model. The paper evaluates the method on large language models and shows that it can achieve strong performance with a significant reduction in the number of trainable parameters and in GPU memory usage. The method is not limited to language models, however, and can be applied to any model that uses LayerNorm layers. In this implementation, all LayerNorm layers of the model are fine-tuned by default, but other layer types such as MLP or attention layers can be targeted by specifying target_modules in the LNTuningConfig, as shown in the sketch below.
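
For example, a configuration that restricts tuning to specific module names could look like the following sketch. The module names here are illustrative assumptions; the actual names depend on the model architecture.

>>> from peft import LNTuningConfig, TaskType

>>> # Illustrative module names; inspect your model's named_modules() for the real ones
>>> config = LNTuningConfig(
...     task_type=TaskType.CAUSAL_LM,
...     target_modules=["input_layernorm", "post_attention_layernorm"],
... )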

The abstract from the paper is:

This paper introduces an efficient strategy to transform Large Language Models (LLMs) into Multi-Modal Large Language Models (MLLMs). By conceptualizing this transformation as a domain adaptation process, i.e., transitioning from text understanding to embracing multiple modalities, we intriguingly note that, within each attention block, tuning LayerNorm suffices to yield strong performance. Moreover, when benchmarked against other tuning approaches like full parameter finetuning or LoRA, its benefits on efficiency are substantial. For example, when compared to LoRA on a 13B model scale, performance can be enhanced by an average of over 20% across five multi-modal tasks, and meanwhile, results in a significant reduction of trainable parameters by 41.9% and a decrease in GPU memory usage by 17.6%. On top of this LayerNorm strategy, we showcase that selectively tuning only with conversational data can improve efficiency further. Beyond these empirical outcomes, we provide a comprehensive analysis to explore the role of LayerNorm in adapting LLMs to the multi-modal domain and improving the expressive power of the model.

LNTuningConfig

class peft.LNTuningConfig

( peft_type: Union = None auto_mapping: Optional = None base_model_name_or_path: Optional = None revision: Optional = None task_type: Union = None inference_mode: bool = False target_modules: Optional[Union[list[str], str]] = None modules_to_save: Optional[Union[list[str], str]] = None )

Parameters

  • target_modules (Optional[Union[List[str], str]]) — List of module names or regex expression of the module names to replace with LN Tuning. For example, '.*decoder.*' or '.*encoder.*'. If this is not specified, modules will be chosen according to the model architecture. If the architecture is not known, an error will be raised; in this case, you should specify the target modules manually.
  • modules_to_save (Optional[Union[List[str], str]]) — List of modules to be set as trainable and saved in the final checkpoint. For example, in Sequence Classification or Token Classification tasks, the final layer classifier/score are randomly initialized and as such need to be trainable and saved.

This is the configuration class to store the configuration of an LNTuningModel.
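
As a sketch, a configuration for a sequence classification task might combine a regex for target_modules with modules_to_save to keep the randomly initialized classification head trainable. The module names below are assumptions and vary by architecture.

>>> from peft import LNTuningConfig, TaskType

>>> # Sketch: regex-based LayerNorm targeting plus a trainable classification head.
>>> # "layernorm" and "score" are placeholder names; check your model's modules.
>>> config = LNTuningConfig(
...     task_type=TaskType.SEQ_CLS,
...     target_modules=".*layernorm.*",
...     modules_to_save=["score"],
... )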

LNTuningModel

class peft.LNTuningModel

( model, config, adapter_name ) → torch.nn.Module

Parameters

  • model (torch.nn.Module) — The model to be adapted.
  • config (LNTuningConfig) — The configuration of the LN Tuning model.
  • adapter_name (str) — The name of the adapter, defaults to "default".

Returns

torch.nn.Module

The adapted model with its LayerNorm layers set as trainable.

Creates LayerNorm tuning from a pretrained transformer model.

The method is described in detail in https://arxiv.org/abs/2312.11420.

Example:

>>> from transformers import AutoModelForCausalLM
>>> from peft import get_peft_model, TaskType, LNTuningConfig

>>> # By default, all LayerNorm layers of the model are set as trainable
>>> peft_config = LNTuningConfig(
...     task_type=TaskType.CAUSAL_LM,
... )

>>> model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
>>> model = get_peft_model(model, peft_config)
>>> model.print_trainable_parameters()

disable_adapter_layers

( )

Disable all adapters.

When disabling all adapters, the model output corresponds to the output of the base model.

enable_adapter_layers

( )

Enable all adapters.

Call this if you have previously disabled all adapters and want to re-enable them.
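
A minimal sketch of a disable/enable round trip, assuming model was created with get_peft_model as in the example above; accessing the tuner through model.base_model follows the usual PEFT pattern.

>>> # Disable the LN Tuning adapter; outputs now match the base model
>>> model.base_model.disable_adapter_layers()
>>> # ... run the base model, e.g. for comparison ...
>>> # Re-enable the adapter afterwards
>>> model.base_model.enable_adapter_layers()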
