Multitask prompt tuning

Multitask prompt tuning (MPT) learns a single transferable prompt by distilling knowledge from the soft prompts of multiple tasks, instead of keeping a separate prompt for each task. The shared prompt is then adapted to each task through multiplicative low-rank updates.

The abstract from the paper is:

Prompt tuning, in which a base pretrained model is adapted to each task via conditioning on learned prompt vectors, has emerged as a promising approach for efficiently adapting large language models to multiple downstream tasks. However, existing methods typically learn soft prompt vectors from scratch, and it has not been clear how to exploit the rich cross-task knowledge with prompt vectors in a multitask learning setting. We propose multitask prompt tuning (MPT), which first learns a single transferable prompt by distilling knowledge from multiple task-specific source prompts. We then learn multiplicative low rank updates to this shared prompt to efficiently adapt it to each downstream target task. Extensive experiments on 23 NLP datasets demonstrate that our proposed approach outperforms the state-of-the-art methods, including the full finetuning baseline in some cases, despite only tuning 0.035% as many task-specific parameters.
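To make the mechanism concrete, here is a minimal sketch of the multiplicative low-rank update in plain PyTorch. The tensor names and shapes are illustrative placeholders, not part of the PEFT API:

```python
import torch

# Illustrative shapes only: l virtual tokens, embedding dim d, rank r.
l, d, r = 50, 768, 1

shared_prompt = torch.randn(l, d)   # single transferable prompt shared by all tasks
task_cols = torch.randn(l, r)       # task-specific low-rank factor
task_rows = torch.randn(r, d)       # task-specific low-rank factor

# Multiplicative low-rank update: rescale the shared prompt element-wise
# by a rank-r matrix built from the two task-specific factors.
task_prompt = shared_prompt * (task_cols @ task_rows)
```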

MultitaskPromptTuningConfig

class peft.MultitaskPromptTuningConfig

(
    peft_type: Union = None,
    auto_mapping: Optional = None,
    base_model_name_or_path: Optional = None,
    revision: Optional = None,
    task_type: Union = None,
    inference_mode: bool = False,
    num_virtual_tokens: int = None,
    token_dim: int = None,
    num_transformer_submodules: Optional = None,
    num_attention_heads: Optional = None,
    num_layers: Optional = None,
    prompt_tuning_init: Union = <MultitaskPromptTuningInit.RANDOM: 'RANDOM'>,
    prompt_tuning_init_text: Optional = None,
    tokenizer_name_or_path: Optional = None,
    tokenizer_kwargs: Optional = None,
    prompt_tuning_init_state_dict_path: Optional = None,
    prompt_tuning_init_task: Optional = 0,
    num_ranks: Optional = 1,
    num_tasks: Optional = 1
)
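A minimal usage sketch for source-task training, assuming a T5 base model wrapped with get_peft_model; the model name, init text, and hyperparameter values below are placeholders rather than recommended settings:

```python
from transformers import AutoModelForSeq2SeqLM
from peft import (
    MultitaskPromptTuningConfig,
    MultitaskPromptTuningInit,
    TaskType,
    get_peft_model,
)

config = MultitaskPromptTuningConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    prompt_tuning_init=MultitaskPromptTuningInit.TEXT,
    prompt_tuning_init_text="classify the following text:",  # placeholder init text
    tokenizer_name_or_path="t5-base",
    num_virtual_tokens=50,
    num_tasks=2,   # number of source tasks trained together
    num_ranks=1,   # rank of the per-task multiplicative update
)

model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
model = get_peft_model(model, config)
model.print_trainable_parameters()
```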

MultitaskPromptEmbedding

class peft.tuners.MultitaskPromptEmbedding

( config: MultitaskPromptTuningConfig word_embeddings )
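During a forward pass, the prompt embedding selects each example's task-specific low-rank factors from a task_ids tensor. A short training-step sketch continuing the setup above; the input strings, labels, and task assignments are placeholders:

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-base")

inputs = tokenizer(
    ["I loved this movie.", "A man is playing a guitar."],
    return_tensors="pt",
    padding=True,
)
labels = tokenizer(["positive", "neutral"], return_tensors="pt", padding=True).input_ids

# task_ids assigns each example to one of the `num_tasks` source tasks.
task_ids = torch.tensor([0, 1])

output = model(
    input_ids=inputs.input_ids,
    attention_mask=inputs.attention_mask,
    labels=labels,
    task_ids=task_ids,
)
loss = output.loss
```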
