Models
PeftModel is the base model class for specifying the base Transformer model and configuration to apply a PEFT method to. The base PeftModel contains methods for loading and saving models from the Hub, and supports the PromptEncoder for prompt learning.
PeftModel
class peft.PeftModel( model: PreTrainedModel, peft_config: PeftConfig, adapter_name: str = 'default' )
Parameters
model (PreTrainedModel) — The base transformer model used for Peft.
peft_config (PeftConfig) — The configuration of the Peft model.
Base model encompassing various Peft methods.
Attributes:
base_model (PreTrainedModel) — The base transformer model used for Peft.
peft_config (PeftConfig) — The configuration of the Peft model.
modules_to_save (list of str) — The list of sub-module names to save when
saving the model.
prompt_encoder (PromptEncoder) — The prompt encoder used for Peft if
using PromptLearningConfig.
prompt_tokens (torch.Tensor) — The virtual prompt tokens used for Peft if
using PromptLearningConfig.
transformer_backbone_name (str) — The name of the transformer
backbone in the base model if using PromptLearningConfig.
word_embeddings (torch.nn.Embedding) — The word embeddings of the transformer backbone
in the base model if using PromptLearningConfig.
create_or_update_model_card( output_dir: str )
Updates or creates the model card to include information about peft:
Adds peft library tag
Adds peft version
Adds quantization information if it was used
disable_adapter( )
Disables the adapter module.
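A minimal usage sketch (illustrative only, assuming disable_adapter acts as a context manager as in recent PEFT releases, and that inputs is an already prepared batch):
>>> with peft_model.disable_adapter():
...     base_outputs = peft_model(**inputs)  # adapter bypassed: this runs the frozen base model only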
forward( *args: Any, **kwargs: Any )
Forward pass of the model.
from_pretrained( model: PreTrainedModel, model_id: Union[str, os.PathLike], adapter_name: str = 'default', is_trainable: bool = False, config: Optional[PeftConfig] = None, **kwargs: Any )
Parameters
model (PreTrainedModel) —
The model to be adapted. The model should be initialized with the
from_pretrained method from the 🤗 Transformers library.
model_id (str or os.PathLike) —
The name of the Lora configuration to use. Can be either:
A string, the model id of a Lora configuration hosted inside a model repo on the Hugging Face
Hub.
A path to a directory containing a Lora configuration file saved using the save_pretrained
method (./my_lora_config_directory/).
adapter_name (str, optional, defaults to "default") —
The name of the adapter to be loaded. This is useful for loading multiple adapters.
is_trainable (bool, optional, defaults to False) —
Whether the adapter should be trainable or not. If False, the adapter will be frozen and used for inference.
config (PeftConfig, optional) —
The configuration object to use instead of an automatically loaded configuration. This configuration
object is mutually exclusive with model_id and kwargs. This is useful when configuration is already
loaded before calling from_pretrained.
kwargs — (optional):
Additional keyword arguments passed along to the specific Lora configuration class.
Instantiate a LoraModel from a pretrained Lora configuration and weights.
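A minimal loading sketch (the adapter id below is a placeholder; substitute your own adapter repo or local directory):
>>> from transformers import AutoModelForCausalLM
>>> from peft import PeftModel
>>> base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
>>> peft_model = PeftModel.from_pretrained(base_model, "my-user/opt-350m-lora-adapter")
Because is_trainable defaults to False, the loaded adapter is frozen and set up for inference.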
get_base_model( )
Returns the base model.
get_nb_trainable_parameters( )
Returns the number of trainable parameters and number of all parameters in the model.
get_prompt( batch_size: int )
Returns the virtual prompts to use for Peft. Only applicable when peft_config.peft_type != PeftType.LORA.
get_prompt_embedding_to_save( adapter_name: str )
Returns the prompt embedding to save when saving the model. Only applicable when peft_config.peft_type != PeftType.LORA.
print_trainable_parameters( )
Prints the number of trainable parameters in the model.
save_pretrained( save_directory: str, safe_serialization: bool = False, selected_adapters: Optional[List[str]] = None, **kwargs: Any )
Parameters
save_directory (str) —
Directory where the adapter model and configuration files will be saved (will be created if it does not
exist).
kwargs (additional keyword arguments, optional) —
Additional keyword arguments passed along to the push_to_hub method.
This function saves the adapter model and the adapter configuration files to a directory, so that it can be
reloaded using the LoraModel.from_pretrained class method, and also used by the LoraModel.push_to_hub
method.
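A minimal save-and-reload sketch (the directory name is illustrative):
>>> peft_model.save_pretrained("./my_adapter")  # writes the adapter weights and adapter_config.json
>>> # later, rebuild the base model and reload the adapter from the same directory
>>> peft_model = PeftModel.from_pretrained(base_model, "./my_adapter")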
set_adapter( adapter_name: str )
Sets the active adapter.
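A minimal sketch of switching between two adapters (the adapter repos are placeholders; load_adapter is used here to attach the second one):
>>> peft_model = PeftModel.from_pretrained(base_model, "my-user/adapter-a", adapter_name="task_a")
>>> peft_model.load_adapter("my-user/adapter-b", adapter_name="task_b")
>>> peft_model.set_adapter("task_b")  # "task_b" is now the active adapter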
PeftModelForSequenceClassification
A PeftModel for sequence classification tasks.
class peft.PeftModelForSequenceClassification( model, peft_config: PeftConfig, adapter_name = 'default' )
Parameters
model (PreTrainedModel) — Base transformer model.
peft_config (PeftConfig) — Peft config.
Peft model for sequence classification tasks.
Attributes:
config (PretrainedConfig) — The configuration object of the base model.
cls_layer_name (str) — The name of the classification layer.
Example:
Copied
>>> from transformers import AutoModelForSequenceClassification
>>> from peft import PeftModelForSequenceClassification, get_peft_config
>>> config = {
... "peft_type": "PREFIX_TUNING",
... "task_type": "SEQ_CLS",
... "inference_mode": False,
... "num_virtual_tokens": 20,
... "token_dim": 768,
... "num_transformer_submodules": 1,
... "num_attention_heads": 12,
... "num_layers": 12,
... "encoder_hidden_size": 768,
... "prefix_projection": False,
... "postprocess_past_key_value_function": None,
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForSequenceClassification(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 370178 || all params: 108680450 || trainable%: 0.3406113979101117
PeftModelForTokenClassification
A PeftModel for token classification tasks.
class peft.PeftModelForTokenClassification( model, peft_config: PeftConfig = None, adapter_name = 'default' )
Parameters
model (PreTrainedModel) — Base transformer model.
peft_config (PeftConfig) — Peft config.
Peft model for token classification tasks.
Attributes:
config (PretrainedConfig) — The configuration object of the base model.
cls_layer_name (str) — The name of the classification layer.
Example:
Copied
>>> from transformers import AutoModelForTokenClassification
>>> from peft import PeftModelForTokenClassification, get_peft_config
>>> config = {
... "peft_type": "PREFIX_TUNING",
... "task_type": "TOKEN_CLS",
... "inference_mode": False,
... "num_virtual_tokens": 20,
... "token_dim": 768,
... "num_transformer_submodules": 1,
... "num_attention_heads": 12,
... "num_layers": 12,
... "encoder_hidden_size": 768,
... "prefix_projection": False,
... "postprocess_past_key_value_function": None,
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModelForTokenClassification.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForTokenClassification(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 370178 || all params: 108680450 || trainable%: 0.3406113979101117
PeftModelForCausalLM
A PeftModel for causal language modeling.
class peft.PeftModelForCausalLM( model, peft_config: PeftConfig, adapter_name = 'default' )
Parameters
model (PreTrainedModel) — Base transformer model.
peft_config (PeftConfig) — Peft config.
Peft model for causal language modeling.
Example:
Copied
>>> from transformers import AutoModelForCausalLM
>>> from peft import PeftModelForCausalLM, get_peft_config
>>> config = {
... "peft_type": "PREFIX_TUNING",
... "task_type": "CAUSAL_LM",
... "inference_mode": False,
... "num_virtual_tokens": 20,
... "token_dim": 1280,
... "num_transformer_submodules": 1,
... "num_attention_heads": 20,
... "num_layers": 36,
... "encoder_hidden_size": 1280,
... "prefix_projection": False,
... "postprocess_past_key_value_function": None,
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModelForCausalLM.from_pretrained("gpt2-large")
>>> peft_model = PeftModelForCausalLM(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 1843200 || all params: 775873280 || trainable%: 0.23756456724479544
PeftModelForSeq2SeqLM
A PeftModel for sequence-to-sequence language modeling.
class peft.PeftModelForSeq2SeqLM( model, peft_config: PeftConfig, adapter_name = 'default' )
Parameters
model (PreTrainedModel) — Base transformer model.
peft_config (PeftConfig) — Peft config.
Peft model for sequence-to-sequence language modeling.
Example:
Copied
>>> from transformers import AutoModelForSeq2SeqLM
>>> from peft import PeftModelForSeq2SeqLM, get_peft_config
>>> config = {
... "peft_type": "LORA",
... "task_type": "SEQ_2_SEQ_LM",
... "inference_mode": False,
... "r": 8,
... "target_modules": ["q", "v"],
... "lora_alpha": 32,
... "lora_dropout": 0.1,
... "fan_in_fan_out": False,
... "enable_lora": None,
... "bias": "none",
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> peft_model = PeftModelForSeq2SeqLM(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 884736 || all params: 223843584 || trainable%: 0.3952474242013566
PeftModelForQuestionAnswering
A PeftModel for question answering.
class peft.PeftModelForQuestionAnswering( model, peft_config: PeftConfig = None, adapter_name = 'default' )
Parameters
model (PreTrainedModel) — Base transformer model.
peft_config (PeftConfig) — Peft config.
Peft model for extractive question answering.
Attributes:
config (PretrainedConfig) — The configuration object of the base model.
cls_layer_name (str) — The name of the classification layer.
Example:
Copied
>>> from transformers import AutoModelForQuestionAnswering
>>> from peft import PeftModelForQuestionAnswering, get_peft_config
>>> config = {
... "peft_type": "LORA",
... "task_type": "QUESTION_ANS",
... "inference_mode": False,
... "r": 16,
... "target_modules": ["query", "value"],
... "lora_alpha": 32,
... "lora_dropout": 0.05,
... "fan_in_fan_out": False,
... "bias": "none",
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModelForQuestionAnswering.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForQuestionAnswering(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 592900 || all params: 108312580 || trainable%: 0.5473971721475013
PeftModelForFeatureExtraction
A PeftModel for extracting features/embeddings from transformer models.
class peft.PeftModelForFeatureExtraction( model, peft_config: PeftConfig = None, adapter_name = 'default' )
Parameters
model (PreTrainedModel) — Base transformer model.
peft_config (PeftConfig) — Peft config.
Peft model for extracting features/embeddings from transformer models.
Attributes:
config (PretrainedConfig) — The configuration object of the base model.
Example:
Copied
>>> from transformers import AutoModel
>>> from peft import PeftModelForFeatureExtraction, get_peft_config
>>> config = {
... "peft_type": "LORA",
... "task_type": "FEATURE_EXTRACTION",
... "inference_mode": False,
... "r": 16,
... "target_modules": ["query", "value"],
... "lora_alpha": 32,
... "lora_dropout": 0.05,
... "fan_in_fan_out": False,
... "bias": "none",
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModel.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForFeatureExtraction(model, peft_config)
>>> peft_model.print_trainable_parameters()
PEFT
🤗 PEFT, or Parameter-Efficient Fine-Tuning (PEFT), is a library for efficiently adapting pre-trained language models (PLMs) to various downstream applications without fine-tuning all the model’s parameters.
PEFT methods only fine-tune a small number of (extra) model parameters, significantly decreasing computational and storage costs because fine-tuning large-scale PLMs is prohibitively costly.
Recent state-of-the-art PEFT techniques achieve performance comparable to that of full fine-tuning.
PEFT is seamlessly integrated with 🤗 Accelerate for large-scale models leveraging DeepSpeed and Big Model Inference.
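As a minimal sketch of the workflow (the model name and hyperparameters below are only placeholders), a 🤗 Transformers model is wrapped with a PEFT configuration so that only the small set of added adapter parameters is trained:
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")  # any supported base model
peft_config = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, r=8, lora_alpha=32, lora_dropout=0.1)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # reports a small trainable fraction of the total parameters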
Get started
Start here if you're new to 🤗 PEFT to get an overview of the library's main features, and how to train a model with a PEFT method.
How-to guides
Practical guides demonstrating how to apply various PEFT methods across different types of tasks like image classification, causal language modeling, automatic speech recognition, and more. Learn how to use 🤗 PEFT with the DeepSpeed and Fully Sharded Data Parallel scripts.
Conceptual guides
Get a better theoretical understanding of how LoRA and various soft prompting methods help reduce the number of trainable parameters to make training more efficient.
Reference
Technical descriptions of how 🤗 PEFT classes and methods work.
Supported methods
LoRA: LORA: LOW-RANK ADAPTATION OF LARGE LANGUAGE MODELS
Prefix Tuning: Prefix-Tuning: Optimizing Continuous Prompts for Generation, P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
P-Tuning: GPT Understands, Too
Prompt Tuning: The Power of Scale for Parameter-Efficient Prompt Tuning
AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning
LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
IA3: Infused Adapter by Inhibiting and Amplifying Inner Activations
Supported models
The tables provided below list the PEFT methods and models supported for each task. To apply a particular PEFT method for
a task, please refer to the corresponding Task guides.
Causal Language Modeling
| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|---|---|---|---|---|---|
| GPT-2 | ✅ | ✅ | ✅ | ✅ | ✅ |
| Bloom | ✅ | ✅ | ✅ | ✅ | ✅ |
| OPT | ✅ | ✅ | ✅ | ✅ | ✅ |
| GPT-Neo | ✅ | ✅ | ✅ | ✅ | ✅ |
| GPT-J | ✅ | ✅ | ✅ | ✅ | ✅ |
| GPT-NeoX-20B | ✅ | ✅ | ✅ | ✅ | ✅ |
| LLaMA | ✅ | ✅ | ✅ | ✅ | ✅ |
| ChatGLM | ✅ | ✅ | ✅ | ✅ | ✅ |
Conditional Generation
| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|---|---|---|---|---|---|
| T5 | ✅ | ✅ | ✅ | ✅ | ✅ |
| BART | ✅ | ✅ | ✅ | ✅ | ✅ |
Sequence Classification
| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|---|---|---|---|---|---|
| BERT | ✅ | ✅ | ✅ | ✅ | ✅ |
| RoBERTa | ✅ | ✅ | ✅ | ✅ | ✅ |
| GPT-2 | ✅ | ✅ | ✅ | ✅ | |
| Bloom | ✅ | ✅ | ✅ | ✅ | |
| OPT | ✅ | ✅ | ✅ | ✅ | |
| GPT-Neo | ✅ | ✅ | ✅ | ✅ | |
| GPT-J | ✅ | ✅ | ✅ | ✅ | |
| Deberta | ✅ | | ✅ | ✅ | |
| Deberta-v2 | ✅ | | ✅ | ✅ | |
Token Classification
| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|---|---|---|---|---|---|
| BERT | ✅ | ✅ | | | |
| RoBERTa | ✅ | ✅ | | | |
| GPT-2 | ✅ | ✅ | | | |
| Bloom | ✅ | ✅ | | | |
| OPT | ✅ | ✅ | | | |
| GPT-Neo | ✅ | ✅ | | | |
| GPT-J | ✅ | ✅ | | | |
| Deberta | ✅ | | | | |
| Deberta-v2 | ✅ | | | | |
Text-to-Image Generation
| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|---|---|---|---|---|---|
| Stable Diffusion | ✅ | | | | |
Image Classification
| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|---|---|---|---|---|---|
| ViT | ✅ | | | | |
| Swin | ✅ | | | | |
Image to text (Multi-modal models)
We have tested LoRA for ViT and Swin for fine-tuning on image classification.
However, it should be possible to use LoRA for any ViT-based model from 🤗 Transformers.
Check out the Image classification task guide to learn more. If you run into problems, please open an issue.
| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|---|---|---|---|---|---|
| Blip-2 | ✅ | | | | |
Semantic Segmentation
As with image-to-text models, you should be able to apply LoRA to any of the segmentation models.
It’s worth noting that we haven’t tested this with every architecture yet. Therefore, if you come across any issues, kindly create an issue report.
| Model | LoRA | Prefix Tuning | P-Tuning | Prompt Tuning | IA3 |
|---|---|---|---|---|---|
| SegFormer | ✅ | | | | |
Image classification using LoRA
This guide demonstrates how to use LoRA, a low-rank approximation technique, to fine-tune an image classification model.
By using LoRA from 🤗 PEFT, we can reduce the number of trainable parameters in the model to only 0.77% of the original.
LoRA achieves this reduction by adding low-rank “update matrices” to specific blocks of the model, such as the attention
blocks. During fine-tuning, only these matrices are trained, while the original model parameters are left unchanged.
At inference time, the update matrices are merged with the original model parameters to produce the final classification result.
For more information on LoRA, please refer to the original LoRA paper.
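If you prefer a standalone checkpoint for deployment, the update matrices can also be folded into the base weights ahead of time. A minimal sketch using PEFT's merge_and_unload utility for LoRA models, shown on a hypothetical lora_model like the one built later in this guide:
merged_model = lora_model.merge_and_unload()  # returns the base model with the LoRA updates merged into its weights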
Install dependencies
Install the libraries required for model training:
Copied
!pip install transformers accelerate evaluate datasets peft -q
Check the versions of all required libraries to make sure you are up to date:
Copied
import transformers
import accelerate
import peft
print(f"Transformers version: {transformers.__version__}")
print(f"Accelerate version: {accelerate.__version__}")
print(f"PEFT version: {peft.__version__}")
"Transformers version: 4.27.4"
"Accelerate version: 0.18.0"
"PEFT version: 0.2.0"
Authenticate to share your model
To share the fine-tuned model at the end of the training with the community, authenticate using your 🤗 token.
You can obtain your token from your account settings.
Copied
from huggingface_hub import notebook_login
notebook_login()
Select a model checkpoint to fine-tune
Choose a model checkpoint from any of the model architectures supported for image classification. When in doubt, refer to
the image classification task guide in
🤗 Transformers documentation.
Copied
model_checkpoint = "google/vit-base-patch16-224-in21k"
Load a dataset
To keep this example’s runtime short, let’s only load the first 5000 instances from the training set of the Food-101 dataset:
Copied
from datasets import load_dataset
dataset = load_dataset("food101", split="train[:5000]")
Dataset preparation
To prepare the dataset for training and evaluation, create label2id and id2label dictionaries. These will come in
handy when performing inference and for metadata information:
Copied
labels = dataset.features["label"].names
label2id, id2label = dict(), dict()
for i, label in enumerate(labels):
    label2id[label] = i
    id2label[i] = label
id2label[2]
"baklava"
Next, load the image processor of the model you’re fine-tuning:
Copied
from transformers import AutoImageProcessor
image_processor = AutoImageProcessor.from_pretrained(model_checkpoint)
The image_processor contains useful information on which size the training and evaluation images should be resized
to, as well as values that should be used to normalize the pixel values. Using the image_processor, prepare transformation
functions for the datasets. These functions will include data augmentation and pixel scaling:
Copied
from torchvision.transforms import (
CenterCrop,
Compose,
Normalize,
RandomHorizontalFlip,
RandomResizedCrop,
Resize,
ToTensor,
)
normalize = Normalize(mean=image_processor.image_mean, std=image_processor.image_std)
train_transforms = Compose(
[
RandomResizedCrop(image_processor.size["height"]),
RandomHorizontalFlip(),
ToTensor(),
normalize,
]
)
val_transforms = Compose(
[
Resize(image_processor.size["height"]),
CenterCrop(image_processor.size["height"]),
ToTensor(),
normalize,
]
)
def preprocess_train(example_batch):
    """Apply train_transforms across a batch."""
    example_batch["pixel_values"] = [train_transforms(image.convert("RGB")) for image in example_batch["image"]]
    return example_batch
def preprocess_val(example_batch):
    """Apply val_transforms across a batch."""
    example_batch["pixel_values"] = [val_transforms(image.convert("RGB")) for image in example_batch["image"]]
    return example_batch
Split the dataset into training and validation sets:
Copied
splits = dataset.train_test_split(test_size=0.1)
train_ds = splits["train"]
val_ds = splits["test"]
Finally, set the transformation functions for the datasets accordingly:
Copied
train_ds.set_transform(
|
78393a73144507c6ac61648a49f8a4ae.txt
|
78393a73144507c6ac61648a49f8a4ae.txt_chunk_14
|
al_ds = splits["test"]
Finally, set the transformation functions for the datasets accordingly:
Copied
train_ds.set_transform(preprocess_train)
val_ds.set_transform(preprocess_val)
Load and prepare a model
Before loading the model, let’s define a helper function to check the total number of parameters a model has, as well
as how many of them are trainable.
Copied
def print_trainable_parameters(model):
    trainable_params = 0
    all_param = 0
    for _, param in model.named_parameters():
        all_param += param.numel()
        if param.requires_grad:
            trainable_params += param.numel()
    print(
        f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param:.2f}"
    )
It’s important to initialize the original model correctly as it will be used as a base to create the PeftModel you’ll
actually fine-tune. Specify the label2id and id2label so that AutoModelForImageClassification can append a classification
head to the underlying model, adapted for this dataset. You should see the following output:
Copied
Some weights of ViTForImageClassification were not initialized from the model checkpoint at google/vit-base-patch16-224-in21k and are newly initialized: ['classifier.weight', 'classifier.bias']
Copied
from transformers import AutoModelForImageClassification, TrainingArguments, Trainer
model = AutoModelForImageClassification.from_pretrained(
model_checkpoint,
label2id=label2id,
id2label=id2label,
ignore_mismatched_sizes=True, # provide this in case you're planning to fine-tune an already fine-tuned checkpoint
)
Before creating a PeftModel, you can check the number of trainable parameters in the original model:
Copied
print_trainable_parameters(model)
"trainable params: 85876325 || all params: 85876325 || trainable%: 100.00"
Next, use get_peft_model to wrap the base model so that “update” matrices are added to the respective places.
Copied
from peft import LoraConfig, get_peft_model
config = LoraConfig(
r=16,
lora_alpha=16,
target_modules=["query", "value"],
lora_dropout=0.1,
bias="none",
modules_to_save=["classifier"],
)
lora_model = get_peft_model(model, config)
print_trainable_parameters(lora_model)
"trainable params: 667493 || all params: 86466149 || trainable%: 0.77"
Let’s unpack what’s going on here.
To use LoRA, you need to specify the target modules in LoraConfig so that get_peft_model() knows which modules
inside our model need to be amended with LoRA matrices. In this example, we’re only interested in targeting the query and
value matrices of the attention blocks of the base model. Since the parameters corresponding to these matrices are “named”
“query” and “value” respectively, we specify them accordingly in the target_modules argument of LoraConfig.
We also specify modules_to_save. After wrapping the base model with get_peft_model() along with the config, we get
a new model where only the LoRA parameters are trainable (so-called “update matrices”) while the pre-trained parameters
are kept frozen. However, we want the classifier parameters to be trained too when fine-tuning the base model on our
custom dataset. To ensure that the classifier parameters are also trained, we specify modules_to_save. This also
ensures that these modules are serialized alongside the LoRA trainable parameters when using utilities like save_pretrained()
and push_to_hub().
Here’s what the other parameters mean:
r: The dimension used by the LoRA update matrices.
alpha: Scaling factor.
bias: Specifies if the bias parameters should be trained. None denotes none of the bias parameters will be trained.
r and alpha together control the total number of final trainable parameters when using LoRA, giving you the flexibility
to balance a trade-off between end performance and compute efficiency.
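As a rough back-of-the-envelope sketch of that trade-off (the numbers below are illustrative, assuming a hidden size of 768 as in vit-base): for a weight matrix with input size d_in and output size d_out, LoRA adds r * (d_in + d_out) parameters, and the low-rank update is scaled by lora_alpha / r.
d_in = d_out = 768
r, lora_alpha = 16, 16
extra_params_per_matrix = r * (d_in + d_out)  # 24,576 additional trainable parameters per targeted matrix
scaling = lora_alpha / r                      # 1.0: factor applied to the low-rank update
print(extra_params_per_matrix, scaling)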
By looking at the number of trainable parameters, you can see how many parameters we’re actually training. Since the goal is
to achieve parameter-efficient fine-tuning, you should expect to see fewer trainable parameters in the lora_model
in comparison to the original model, which is indeed the case here.
Define training arguments
For model fine-tuning, use Trainer. It accepts
several arguments which you can wrap using TrainingArguments.
Copied
from transformers import TrainingArguments, Trainer
model_name = model_checkpoint.split("/")[-1]
batch_size = 128
args = TrainingArguments(
f"{model_name}-finetuned-lora-food101",
remove_unused_columns=False,
evaluation_strategy="epoch",
save_strategy="epoch",
learning_rate=5e-3,
per_device_train_batch_size=batch_size,
gradient_accumulation_steps=4,
per_device_eval_batch_size=batch_size,
fp16=True,
num_train_epochs=5,
logging_steps=10,
load_best_model_at_end=True,
metric_for_best_model="accuracy",
push_to_hub=True,
label_names=["labels"],
)
Compared to non-PEFT methods, you can use a larger batch size since there are fewer parameters to train.
You can also set a larger learning rate than the usual one (which is around 1e-5, for example).
This can potentially also reduce the need to conduct expensive hyperparameter tuning experiments.
Prepare evaluation metric
Copied
import numpy as np
import evaluate
metric = evaluate.load("accuracy")
def compute_metrics(eval_pred):
    """Computes accuracy on a batch of predictions"""
    predictions = np.argmax(eval_pred.predictions, axis=1)
    return metric.compute(predictions=predictions, references=eval_pred.label_ids)
The compute_metrics function takes a named tuple as input: predictions, which are the logits of the model as Numpy arrays,
and label_ids, which are the ground-truth labels as Numpy arrays.
Define collation function
A collation function is used by Trainer to gather a batch of training and evaluation examples and prepare them in a
format that is acceptable by the underlying model.
Copied
import torch
def collate_fn(examples):
    pixel_values = torch.stack([example["pixel_values"] for example in examples])
    labels = torch.tensor([example["label"] for example in examples])
    return {"pixel_values": pixel_values, "labels": labels}
Train and evaluate
Bring everything together - model, training arguments, data, collation function, etc. Then, start the training!
Copied
trainer = Trainer(
lora_model,
args,
train_dataset=train_ds,
eval_dataset=val_ds,
tokenizer=image_processor,
compute_metrics=compute_metrics,
data_collator=collate_fn,
)
train_results = trainer.train()
In just a few minutes, the fine-tuned model shows 96% validation accuracy even on this small
subset of the training dataset.
Copied
trainer.evaluate(val_ds)
{
"eval_loss": 0.14475855231285095,
"eval_accuracy": 0.96,
"eval_runtime": 3.5725,
"eval_samples_per_second": 139.958,
"eval_steps_per_second": 1.12,
"epoch": 5.0,
}
Share your model and run inference
Once the fine-tuning is done, share the LoRA parameters with the community like so:
Copied
repo_name = f"sayakpaul/{model_name}-finetuned-lora-food101"
lora_model.push_to_hub(repo_name)
When calling push_to_hub on the lora_model, only the LoRA parameters along with any modules specified in modules_to_save
are saved. Take a look at the trained LoRA parameters.
You’ll see that it’s only 2.6 MB! This greatly helps with portability, especially when using a very large model to fine-tune (such as BLOOM).
Next, let’s see how to load the LoRA updated parameters along with our base model for inference. When you wrap a base model
with PeftModel, modifications are done in-place. To mitigate any concerns that might stem from in-place modifications,
initialize the base model just like you did earlier and construct the inference model.
Copied
from peft import PeftConfig, PeftModel
config = PeftConfig.from_pretrained(repo_name)
model = AutoModelForImageClassification.from_pretrained(
config.base_model_name_or_path,
label2id=label2id,
id2label=id2label,
ignore_mismatched_sizes=True, # provide this in case you're planning to fine-tune an already fine-tuned checkpoint
)
# Load the LoRA model
inference_model = PeftModel.from_pretrained(model, repo_name)
Let’s now fetch an example image for inference.
Copied
from PIL import Image
import requests
url = "https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/beignets.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
image
First, instantiate an image_processor from the underlying model repo.
Copied
image_processor = AutoImageProcessor.from_pretrained(repo_name)
Then, prepare the example for inference.
Copied
encoding = image_processor(image.convert("RGB"), return_tensors="pt")
Finally, run inference!
Copied
with torch.no_grad():
    outputs = inference_model(**encoding)
    logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", inference_model.config.id2label[predicted_class_idx])
"Predicted class: beignets"
Prompt tuning for causal language modeling
Prompting helps guide language model behavior by adding some input text specific to a task. Prompt tuning is an additive method for only training and updating the newly added prompt tokens to a pretrained model. This way, you can use one pretrained model whose weights are frozen, and train and update a smaller set of prompt parameters for each downstream task instead of fully finetuning a separate model.
As models grow larger and larger, prompt tuning can be more efficient, and results are even better as model parameters scale.
💡 Read The Power of Scale for Parameter-Efficient Prompt Tuning to learn more about prompt tuning.
This guide will show you how to apply prompt tuning to train a bloomz-560m model on the twitter_complaints subset of the RAFT dataset.
Before you begin, make sure you have all the necessary libraries installed:
Copied
!pip install -q peft transformers datasets
Setup
Start by defining the model and tokenizer, the dataset and the dataset columns to train on, some training hyperparameters, and the PromptTuningConfig. The PromptTuningConfig contains information about the task type, the text to initialize the prompt embedding, the number of virtual tokens, and the tokenizer to use:
Copied
from transformers import AutoModelForCausalLM, AutoTokenizer, default_data_collator, get_linear_schedule_with_warmup
from peft import get_peft_config, get_peft_model, PromptTuningInit, PromptTuningConfig, TaskType, PeftType
import torch
from datasets import load_dataset
import os
from torch.utils.data import DataLoader
from tqdm import tqdm
device = "cuda"
model_name_or_path = "bigscience/bloomz-560m"
tokenizer_name_or_path = "bigscience/bloomz-560m"
peft_config = PromptTuningConfig(
task_type=TaskType.CAUSAL_LM,
prompt_tuning_init=PromptTuningInit.TEXT,
num_virtual_tokens=8,
prompt_tuning_init_text="Classify if the tweet is a complaint or not:",
tokenizer_name_or_path=model_name_or_path,
)
dataset_name = "twitter_complaints"
checkpoint_name = f"{dataset_name}_{model_name_or_path}_{peft_config.peft_type}_{peft_config.task_type}_v1.pt".replace(
"/", "_"
)
text_column = "Tweet text"
label_column = "text_label"
max_length = 64
lr = 3e-2
num_epochs = 50
batch_size = 8
Load dataset
For this guide, you’ll load the twitter_complaints subset of the RAFT dataset. This subset contains tweets that are labeled either complaint or no complaint:
Copied
dataset = load_dataset("ought/raft", dataset_name)
dataset["train"][0]
{"Tweet text": "@HMRCcustomers No this is my first job", "ID": 0, "Label": 2}
To make the Label column more readable, replace the Label values with the corresponding label text.