IA3
Infused Adapter by Inhibiting and Amplifying Inner Activations, or IA3, is a method that adds three learned vectors to rescale the keys and values of the self-attention and encoder-decoder attention layers, and the intermediate activation of the position-wise feed-forward network.
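As a rough sketch of that rescaling (purely illustrative, not the PEFT implementation itself), each learned vector simply scales an activation element-wise:
>>> import torch
>>> hidden = torch.randn(1, 4, 16)             # (batch, seq_len, hidden) activation
>>> l_ff = torch.nn.Parameter(torch.ones(16))  # learned (IA)^3 vector, initialized to ones
>>> rescaled = hidden * l_ff                   # element-wise inhibition/amplification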
The abstract from the paper is:
Few-shot in-context learning (ICL) enables pre-trained language models to perform a previously-unseen task without any gradient-based training by feeding a small number of training examples as part of the input. ICL incurs substantial computational, memory, and storage costs because it involves processing all of the training examples every time a prediction is made. Parameter-efficient fine-tuning (PEFT) (e.g. adapter modules, prompt tuning, sparse update methods, etc.) offers an alternative paradigm where a small set of parameters are trained to enable a model to perform the new task. In this paper, we rigorously compare few-shot ICL and PEFT and demonstrate that the latter offers better accuracy as well as dramatically lower computational costs. Along the way, we introduce a new PEFT method called (IA)^3 that scales activations by learned vectors, attaining stronger performance while only introducing a relatively tiny amount of new parameters. We also propose a simple recipe based on the T0 model called T-Few that can be applied to new tasks without task-specific tuning or modifications. We validate the effectiveness of T-Few on completely unseen tasks by applying it to the RAFT benchmark, attaining super-human performance for the first time and outperforming the state-of-the-art by 6% absolute. All of the code used in our experiments is publicly available.
IA3Config
class peft.IA3Config
< source >( peft_type: Union = None auto_mapping: Optional = None base_model_name_or_path: Optional = None revision: Optional = None task_type: Union = None inference_mode: bool = False target_modules: Optional[Union[list[str], str]] = None exclude_modules: Optional[Union[list[str], str]] = None feedforward_modules: Optional[Union[list[str], str]] = None fan_in_fan_out: bool = False modules_to_save: Optional[list[str]] = None init_ia3_weights: bool = True )
Parameters
- target_modules (Optional[Union[List[str], str]]) — The names of the modules to apply the adapter to. If this is specified, only the modules with the specified names will be replaced. When passing a string, a regex match will be performed. When passing a list of strings, either an exact match will be performed or it is checked if the name of the module ends with any of the passed strings. If this is specified as 'all-linear', then all linear/Conv1D modules are chosen, excluding the output layer. If this is not specified, modules will be chosen according to the model architecture. If the architecture is not known, an error will be raised — in this case, you should specify the target modules manually.
- exclude_modules (Optional[Union[List[str], str]]) — The names of the modules not to apply the adapter to. When passing a string, a regex match will be performed. When passing a list of strings, either an exact match will be performed or it is checked if the name of the module ends with any of the passed strings.
- feedforward_modules (Optional[Union[List[str], str]]) — The names of the modules to be treated as feedforward modules, as in the original paper. These modules will have the (IA)³ vectors multiplied with the input instead of the output. feedforward_modules must be a name or a subset of names present in target_modules.
- fan_in_fan_out (bool) — Set this to True if the layer to replace stores weights like (fan_in, fan_out). For example, gpt-2 uses Conv1D, which stores weights like (fan_in, fan_out), so this should be set to True.
- modules_to_save (Optional[List[str]]) — List of modules apart from (IA)³ layers to be set as trainable and saved in the final checkpoint.
- init_ia3_weights (bool) — Whether to initialize the vectors in the (IA)³ layers; defaults to True. Setting this to False is discouraged.
This is the configuration class to store the configuration of an IA3Model.
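For example (a minimal sketch; the t5-small checkpoint is only illustrative and the default target modules for T5 are used), the config is typically passed to get_peft_model:
>>> from transformers import AutoModelForSeq2SeqLM
>>> from peft import IA3Config, get_peft_model, TaskType
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
>>> peft_config = IA3Config(task_type=TaskType.SEQ_2_SEQ_LM)
>>> peft_model = get_peft_model(model, peft_config)
>>> peft_model.print_trainable_parameters()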
IA3Model
class peft.IA3Model
< source >( model config adapter_name low_cpu_mem_usage: bool = False ) → torch.nn.Module
Parameters
- model (PreTrainedModel) — The model to be adapted.
- config (IA3Config) — The configuration of the (IA)^3 model.
- adapter_name (str) — The name of the adapter, defaults to "default".
- low_cpu_mem_usage (bool, optional, defaults to False) — Create empty adapter weights on meta device. Useful to speed up the loading process.
Returns
torch.nn.Module
The (IA)^3 model.
Creates an Infused Adapter by Inhibiting and Amplifying Inner Activations ((IA)^3) model from a pretrained transformers model. The method is described in detail in https://arxiv.org/abs/2205.05638.
Example:
>>> from transformers import AutoModelForSeq2SeqLM
>>> from peft import IA3Model, IA3Config
>>> config = IA3Config(
... peft_type="IA3",
... task_type="SEQ_2_SEQ_LM",
... target_modules=["k", "v", "w0"],
... feedforward_modules=["w0"],
... )
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> ia3_model = IA3Model(model, config, "default")
Attributes:
- model (PreTrainedModel) — The model to be adapted.
- peft_config (IA3Config): The configuration of the (IA)^3 model.
add_weighted_adapter
< source >( adapters: list[str] weights: list[float] adapter_name: str )
This method adds a new adapter by merging the given adapters with the given weights.
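For instance (a sketch; the adapter names are hypothetical and assumed to have already been added to the model with the same target modules):
>>> # "task_a" and "task_b" are hypothetical adapters already present on the model
>>> ia3_model.add_weighted_adapter(["task_a", "task_b"], [0.7, 0.3], "combined")
>>> ia3_model.set_adapter("combined")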
delete_adapter
< source >( adapter_name: str )
Deletes an existing adapter.
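For example (assuming a previously added adapter named "task_b", which is hypothetical here):
>>> ia3_model.delete_adapter("task_b")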
disable_adapter_layers
< source >( )
Disable all adapters.
When disabling all adapters, the model output corresponds to the output of the base model.
enable_adapter_layers
< source >( )
Enable all adapters.
Call this if you have previously disabled all adapters and want to re-enable them.
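A short sketch, assuming ia3_model is an IA3Model with at least one adapter attached:
>>> ia3_model.disable_adapter_layers()  # outputs now match the base model
>>> ia3_model.enable_adapter_layers()   # (IA)^3 rescaling is applied again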
merge_and_unload
< source >( safe_merge: bool = False adapter_names: Optional[list[str]] = None )
This method merges the IA³ layers into the base model. This is needed if someone wants to use the base model as a standalone model.
Example:
>>> from transformers import AutoModelForCausalLM
>>> from peft import PeftModel
>>> base_model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-40b")
>>> peft_model_id = "smangrul/falcon-40B-int4-peft-lora-sfttrainer-sample"
>>> model = PeftModel.from_pretrained(base_model, peft_model_id)
>>> merged_model = model.merge_and_unload()
set_adapter
< source >( adapter_name: str | list[str] )
Set the active adapter(s).
Additionally, this function will set the specified adapters to trainable (i.e., requires_grad=True). If this is not desired, use the following code.
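A plausible version of that snippet (the check on the parameter name is only illustrative; adapt it to your module naming):
>>> for name, param in ia3_model.named_parameters():
...     if "ia3" in name:  # illustrative filter for the (IA)^3 parameters
...         param.requires_grad = False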
unload
< source >( )
Gets back the base model by removing all the IA³ modules without merging. This gives back the original base model.
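For example (a sketch, assuming ia3_model was built as above):
>>> base_model = ia3_model.unload()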