P-tuning
P-tuning adds trainable prompt embeddings to the input that are optimized by a prompt encoder to find a better prompt, eliminating the need to manually design prompts. The prompt tokens can be added anywhere in the input sequence, and p-tuning also introduces anchor tokens to improve performance.
The abstract from the paper is:
While GPTs with traditional fine-tuning fail to achieve strong results on natural language understanding (NLU), we show that GPTs can be better than or comparable to similar-sized BERTs on NLU tasks with a novel method P-tuning, which employs trainable continuous prompt embeddings. On the knowledge probing (LAMA) benchmark, the best GPT recovers 64% (P@1) of world knowledge without any additional text provided during test time, which substantially improves the previous best by 20+ percentage points. On the SuperGlue benchmark, GPTs achieve comparable and sometimes better performance to similar-sized BERTs in supervised learning. Importantly, we find that P-tuning also improves BERTs' performance in both few-shot and supervised settings while largely reducing the need for prompt engineering. Consequently, P-tuning outperforms the state-of-the-art approaches on the few-shot SuperGlue benchmark.
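To get a feel for the workflow, here is a minimal sketch of applying p-tuning with PEFT by wrapping a base model with get_peft_model and a PromptEncoderConfig. The base checkpoint ("gpt2") and the hyperparameter values are illustrative choices, not requirements.

>>> from transformers import AutoModelForCausalLM
>>> from peft import get_peft_model, PromptEncoderConfig

>>> # Illustrative values; tune num_virtual_tokens and encoder_hidden_size for your task.
>>> peft_config = PromptEncoderConfig(
...     task_type="CAUSAL_LM",
...     num_virtual_tokens=20,
...     encoder_hidden_size=128,
... )
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")
>>> model = get_peft_model(model, peft_config)
>>> model.print_trainable_parameters()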
PromptEncoderConfig
class peft.PromptEncoderConfig
< source >( peft_type: Union = None, auto_mapping: Optional = None, base_model_name_or_path: Optional = None, revision: Optional = None, task_type: Union = None, inference_mode: bool = False, num_virtual_tokens: int = None, token_dim: int = None, num_transformer_submodules: Optional = None, num_attention_heads: Optional = None, num_layers: Optional = None, encoder_reparameterization_type: Union = <PromptEncoderReparameterizationType.MLP: 'MLP'>, encoder_hidden_size: int = None, encoder_num_layers: int = 2, encoder_dropout: float = 0.0 )
Parameters
- encoder_reparameterization_type (Union[PromptEncoderReparameterizationType, str]) – The type of reparameterization to use.
- encoder_hidden_size (int) – The hidden size of the prompt encoder.
- encoder_num_layers (int) – The number of layers of the prompt encoder.
- encoder_dropout (float) – The dropout probability of the prompt encoder.
This is the configuration class to store the configuration of a PromptEncoder.
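As a sketch of how these parameters fit together, the snippet below builds a config that uses the LSTM reparameterization instead of the default MLP; the specific values are arbitrary examples, not recommended settings.

>>> from peft import PromptEncoderConfig

>>> # "LSTM" can also be passed as PromptEncoderReparameterizationType.LSTM.
>>> lstm_config = PromptEncoderConfig(
...     task_type="CAUSAL_LM",
...     num_virtual_tokens=20,
...     encoder_reparameterization_type="LSTM",
...     encoder_hidden_size=128,
...     encoder_num_layers=2,
...     encoder_dropout=0.1,
... )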
PromptEncoder
class peft.PromptEncoder
< source >( config )
Parameters
- config (PromptEncoderConfig) – The configuration of the prompt encoder.
The prompt encoder network that is used to generate the virtual token embeddings for p-tuning.
Example:
>>> from peft import PromptEncoder, PromptEncoderConfig
>>> config = PromptEncoderConfig(
... peft_type="P_TUNING",
... task_type="SEQ_2_SEQ_LM",
... num_virtual_tokens=20,
... token_dim=768,
... num_transformer_submodules=1,
... num_attention_heads=12,
... num_layers=12,
... encoder_reparameterization_type="MLP",
... encoder_hidden_size=768,
... )
>>> prompt_encoder = PromptEncoder(config)
Attributes:
- embedding (torch.nn.Embedding) – The embedding layer of the prompt encoder.
- mlp_head (torch.nn.Sequential) – The MLP head of the prompt encoder if inference_mode=False.
- lstm_head (torch.nn.LSTM) – The LSTM head of the prompt encoder if inference_mode=False and encoder_reparameterization_type="LSTM".
- token_dim (int) – The hidden embedding dimension of the base transformer model.
- input_size (int) – The input size of the prompt encoder.
- output_size (int) – The output size of the prompt encoder.
- hidden_size (int) – The hidden size of the prompt encoder.
- total_virtual_tokens (int) – The total number of virtual tokens of the prompt encoder.
- encoder_type (Union[PromptEncoderReparameterizationType, str]) – The encoder type of the prompt encoder.
Input shape: (batch_size, total_virtual_tokens)
Output shape: (batch_size, total_virtual_tokens, token_dim)
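Continuing the PromptEncoder example above, this sketch shows those shapes in practice. Inside a PeftModel the virtual-token indices are constructed automatically, so the snippet is only for illustration.

>>> import torch

>>> # With the config above, total_virtual_tokens = num_virtual_tokens * num_transformer_submodules = 20.
>>> indices = torch.arange(20).unsqueeze(0)  # shape (batch_size=1, total_virtual_tokens=20)
>>> virtual_token_embeds = prompt_encoder(indices)
>>> virtual_token_embeds.shape  # torch.Size([1, 20, 768]), i.e. (batch_size, total_virtual_tokens, token_dim)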