chunk_id | chunk_content | filename
---|---|---|
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_9
|
t": "@HMRCcustomers No this is my first job", "ID": 0, "Label": 2}
To make the Label column more readable, replace the Label value with the corresponding label text and store them in a text_label column. You can use the map function to apply this change over the entire dataset in one step:
Copied
classes = [k.replace("_", " ") for k in dataset["train"].features["Label"].names]
dataset = dataset.map(
lambda x: {"text_label": [classes[labe
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_10
|
e("_", " ") for k in dataset["train"].features["Label"].names]
dataset = dataset.map(
lambda x: {"text_label": [classes[label] for label in x["Label"]]},
batched=True,
num_proc=1,
)
dataset["train"][0]
{"Tweet text": "@HMRCcustomers No this is my first job", "ID": 0, "Label": 2, "text_label": "no complaint"}
Preprocess dataset
Next, you’ll set up a tokenizer; configure the appropriate padding token to use for padding sequences, an
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_11
|
}
Preprocess dataset
Next, you’ll set up a tokenizer; configure the appropriate padding token to use for padding sequences, and determine the maximum length of the tokenized labels:
Copied
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
if tokenizer.pad_token_id is None:
tokenizer.pad_token_id = tokenizer.eos_token_id
target_max_length = max([len(tokenizer(class_label)["input_ids"]) for class_label in classes])
print(targ
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_12
|
tokenizer.eos_token_id
target_max_length = max([len(tokenizer(class_label)["input_ids"]) for class_label in classes])
print(target_max_length)
3
Create a preprocess_function to:
Tokenize the input text and labels.
For each example in a batch, pad the labels with the tokenizer’s pad_token_id.
Concatenate the input text and labels into the model_inputs.
Create a separate attention mask for labels and model_inputs.
Loop through each example in the
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_13
|
nd labels into the model_inputs.
Create a separate attention mask for labels and model_inputs.
Loop through each example in the batch again to pad the input ids, labels, and attention mask to the max_length and convert them to PyTorch tensors.
Copied
def preprocess_function(examples):
batch_size = len(examples[text_column])
inputs = [f"{text_column} : {x} Label : " for x in examples[text_column]]
targets = [str(x) for x in exampl
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_14
|
ext_column])
inputs = [f"{text_column} : {x} Label : " for x in examples[text_column]]
targets = [str(x) for x in examples[label_column]]
model_inputs = tokenizer(inputs)
labels = tokenizer(targets)
for i in range(batch_size):
sample_input_ids = model_inputs["input_ids"][i]
label_input_ids = labels["input_ids"][i] + [tokenizer.pad_token_id]
# print(i, sample_input_ids, label_input_ids)
model_i
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_15
|
ut_ids = labels["input_ids"][i] + [tokenizer.pad_token_id]
# print(i, sample_input_ids, label_input_ids)
model_inputs["input_ids"][i] = sample_input_ids + label_input_ids
labels["input_ids"][i] = [-100] * len(sample_input_ids) + label_input_ids
model_inputs["attention_mask"][i] = [1] * len(model_inputs["input_ids"][i])
# print(model_inputs)
for i in range(batch_size):
sample_input_ids = model_inpu
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_16
|
en(model_inputs["input_ids"][i])
# print(model_inputs)
for i in range(batch_size):
sample_input_ids = model_inputs["input_ids"][i]
label_input_ids = labels["input_ids"][i]
model_inputs["input_ids"][i] = [tokenizer.pad_token_id] * (
max_length - len(sample_input_ids)
) + sample_input_ids
model_inputs["attention_mask"][i] = [0] * (max_length - len(sample_input_ids)) + model_inputs[
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_17
|
+ sample_input_ids
model_inputs["attention_mask"][i] = [0] * (max_length - len(sample_input_ids)) + model_inputs[
"attention_mask"
][i]
labels["input_ids"][i] = [-100] * (max_length - len(sample_input_ids)) + label_input_ids
model_inputs["input_ids"][i] = torch.tensor(model_inputs["input_ids"][i][:max_length])
model_inputs["attention_mask"][i] = torch.tensor(model_inputs["attention_mask"][i][
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_18
|
inputs["input_ids"][i][:max_length])
model_inputs["attention_mask"][i] = torch.tensor(model_inputs["attention_mask"][i][:max_length])
labels["input_ids"][i] = torch.tensor(labels["input_ids"][i][:max_length])
model_inputs["labels"] = labels["input_ids"]
return model_inputs
Use the map function to apply the preprocess_function to the entire dataset. You can remove the unprocessed columns since the model won’t need them:
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
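Because preprocess_function is split across several overlapping chunks above, here is the same function consolidated for reference. It assumes the names defined earlier in the guide (tokenizer, text_column, label_column, max_length):

import torch

def preprocess_function(examples):
    batch_size = len(examples[text_column])
    # Build prompts of the form "Tweet text : <tweet> Label : " and keep the label text as the target.
    inputs = [f"{text_column} : {x} Label : " for x in examples[text_column]]
    targets = [str(x) for x in examples[label_column]]
    model_inputs = tokenizer(inputs)
    labels = tokenizer(targets)
    for i in range(batch_size):
        sample_input_ids = model_inputs["input_ids"][i]
        label_input_ids = labels["input_ids"][i] + [tokenizer.pad_token_id]
        # Concatenate prompt and label; mask the prompt positions with -100 so they are ignored by the loss.
        model_inputs["input_ids"][i] = sample_input_ids + label_input_ids
        labels["input_ids"][i] = [-100] * len(sample_input_ids) + label_input_ids
        model_inputs["attention_mask"][i] = [1] * len(model_inputs["input_ids"][i])
    for i in range(batch_size):
        sample_input_ids = model_inputs["input_ids"][i]
        label_input_ids = labels["input_ids"][i]
        # Left-pad (or truncate) everything to max_length and convert to tensors.
        pad_len = max_length - len(sample_input_ids)
        model_inputs["input_ids"][i] = torch.tensor(
            ([tokenizer.pad_token_id] * pad_len + sample_input_ids)[:max_length]
        )
        model_inputs["attention_mask"][i] = torch.tensor(
            ([0] * pad_len + model_inputs["attention_mask"][i])[:max_length]
        )
        labels["input_ids"][i] = torch.tensor(([-100] * pad_len + label_input_ids)[:max_length])
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs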
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_19
|
o apply the preprocess_function to the entire dataset. You can remove the unprocessed columns since the model won’t need them:
Copied
processed_datasets = dataset.map(
preprocess_function,
batched=True,
num_proc=1,
remove_columns=dataset["train"].column_names,
load_from_cache_file=False,
desc="Running tokenizer on dataset",
)
Create a DataLoader from the train and eval datasets. Set pin_memory=True to speed up the dat
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_20
|
="Running tokenizer on dataset",
)
Create a DataLoader from the train and eval datasets. Set pin_memory=True to speed up the data transfer to the GPU during training if the samples in your dataset are on a CPU.
Copied
train_dataset = processed_datasets["train"]
eval_dataset = processed_datasets["test"]
train_dataloader = DataLoader(
train_dataset, shuffle=True, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True
)
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_21
|
oader = DataLoader(
train_dataset, shuffle=True, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True
)
eval_dataloader = DataLoader(eval_dataset, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True)
Train
You’re almost ready to set up your model and start training!
Initialize a base model from AutoModelForCausalLM, and pass it and peft_config to the get_peft_model() function to create a PeftModel.
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_22
|
lize a base model from AutoModelForCausalLM, and pass it and peft_config to the get_peft_model() function to create a PeftModel. You can print the new PeftModel’s trainable parameters to see how much more efficient it is than training the full parameters of the original model!
Copied
model = AutoModelForCausalLM.from_pretrained(model_name_or_path)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
"trainable
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
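The peft_config passed to get_peft_model() is defined earlier in the guide and does not appear in these chunks. As a rough sketch only (the exact values are assumptions, not taken from these chunks), a prompt-tuning configuration for this causal LM task could look like:

from peft import PromptTuningConfig, PromptTuningInit, TaskType

# Hypothetical configuration; the guide defines its own peft_config earlier in the document.
peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    num_virtual_tokens=8,
    prompt_tuning_init_text="Classify if the tweet is a complaint or not:",
    tokenizer_name_or_path=model_name_or_path,
)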
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_23
|
m_pretrained(model_name_or_path)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
"trainable params: 8192 || all params: 559222784 || trainable%: 0.0014648902430985358"
Set up an optimizer and learning rate scheduler:
Copied
optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=0,
num_training_steps=(len(tra
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_24
|
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=0,
num_training_steps=(len(train_dataloader) * num_epochs),
)
Move the model to the GPU, then write a training loop to start training!
Copied
model = model.to(device)
for epoch in range(num_epochs):
model.train()
total_loss = 0
for step, batch in enumerate(tqdm(train_dataloader)):
batch = {k: v.to(device) for k, v in batch.i
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_25
|
total_loss = 0
for step, batch in enumerate(tqdm(train_dataloader)):
batch = {k: v.to(device) for k, v in batch.items()}
outputs = model(**batch)
loss = outputs.loss
total_loss += loss.detach().float()
loss.backward()
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
model.eval()
eval_loss = 0
eval_preds = []
for step, batch in enumerate(tqdm(eval_d
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_26
|
optimizer.zero_grad()
model.eval()
eval_loss = 0
eval_preds = []
for step, batch in enumerate(tqdm(eval_dataloader)):
batch = {k: v.to(device) for k, v in batch.items()}
with torch.no_grad():
outputs = model(**batch)
loss = outputs.loss
eval_loss += loss.detach().float()
eval_preds.extend(
tokenizer.batch_decode(torch.argmax(outputs.logits, -1).detach().cpu()
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_27
|
s.detach().float()
eval_preds.extend(
tokenizer.batch_decode(torch.argmax(outputs.logits, -1).detach().cpu().numpy(), skip_special_tokens=True)
)
eval_epoch_loss = eval_loss / len(eval_dataloader)
eval_ppl = torch.exp(eval_epoch_loss)
train_epoch_loss = total_loss / len(train_dataloader)
train_ppl = torch.exp(train_epoch_loss)
print(f"{epoch=}: {train_ppl=} {train_epoch_loss=} {eval_ppl=} {eval_e
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
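Since the training and evaluation loop is split across several overlapping chunks, here it is consolidated. It assumes the variables defined earlier in the guide (device, num_epochs, optimizer, lr_scheduler, tokenizer, and the dataloaders):

model = model.to(device)
for epoch in range(num_epochs):
    # Training pass
    model.train()
    total_loss = 0
    for step, batch in enumerate(tqdm(train_dataloader)):
        batch = {k: v.to(device) for k, v in batch.items()}
        outputs = model(**batch)
        loss = outputs.loss
        total_loss += loss.detach().float()
        loss.backward()
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()

    # Evaluation pass
    model.eval()
    eval_loss = 0
    eval_preds = []
    for step, batch in enumerate(tqdm(eval_dataloader)):
        batch = {k: v.to(device) for k, v in batch.items()}
        with torch.no_grad():
            outputs = model(**batch)
        loss = outputs.loss
        eval_loss += loss.detach().float()
        eval_preds.extend(
            tokenizer.batch_decode(torch.argmax(outputs.logits, -1).detach().cpu().numpy(), skip_special_tokens=True)
        )

    # Report perplexity and loss for both splits at the end of each epoch.
    eval_epoch_loss = eval_loss / len(eval_dataloader)
    eval_ppl = torch.exp(eval_epoch_loss)
    train_epoch_loss = total_loss / len(train_dataloader)
    train_ppl = torch.exp(train_epoch_loss)
    print(f"{epoch=}: {train_ppl=} {train_epoch_loss=} {eval_ppl=} {eval_epoch_loss=}")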
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_28
|
taloader)
train_ppl = torch.exp(train_epoch_loss)
print(f"{epoch=}: {train_ppl=} {train_epoch_loss=} {eval_ppl=} {eval_epoch_loss=}")
Share model
You can store and share your model on the Hub if you’d like. Log in to your Hugging Face account and enter your token when prompted:
Copied
from huggingface_hub import notebook_login
notebook_login()
Use the push_to_hub function to upload your model to a model repository on the Hub:
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_29
|
import notebook_login
notebook_login()
Use the push_to_hub function to upload your model to a model repository on the Hub:
Copied
peft_model_id = "your-name/bloomz-560m_PROMPT_TUNING_CAUSAL_LM"
model.push_to_hub(peft_model_id, use_auth_token=True)
Once the model is uploaded, you’ll see the model file size is only 33.5kB! 🤏
Inference
Let’s try the model on a sample input for inference. If you look at the
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_30
|
l see the model file size is only 33.5kB! 🤏
Inference
Let’s try the model on a sample input for inference. If you look at the repository you uploaded the model to, you’ll see an adapter_config.json file. Load this file into PeftConfig to specify the peft_type and task_type. Then you can load the prompt-tuned model weights and the configuration into from_pretrained() to create the PeftModel:
Copied
from peft import PeftModel, PeftConfig
p
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_31
|
ights, and the configuration into from_pretrained() to create the PeftModel:
Copied
from peft import PeftModel, PeftConfig
peft_model_id = "stevhliu/bloomz-560m_PROMPT_TUNING_CAUSAL_LM"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(model, peft_model_id)
Grab a tweet and tokenize it:
Copied
inputs = tokenizer(
f'{text_c
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_32
|
odel = PeftModel.from_pretrained(model, peft_model_id)
Grab a tweet and tokenize it:
Copied
inputs = tokenizer(
f'{text_column} : {"@nationalgridus I have no water and the bill is current and paid. Can you do something about this?"} Label : ',
return_tensors="pt",
)
Put the model on a GPU and generate the predicted label:
Copied
model.to(device)
with torch.no_grad():
inputs = {k: v.to(device) for k, v in inputs.items()}
o
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_33
|
edicted label:
Copied
model.to(device)
with torch.no_grad():
inputs = {k: v.to(device) for k, v in inputs.items()}
outputs = model.generate(
input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], max_new_tokens=10, eos_token_id=3
)
print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True))
[
"Tweet text : @nationalgridus I have no water and the bill is current and pai
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
176b966ffabdbae904d70b3c31cdac0c.txt_chunk_34
|
().cpu().numpy(), skip_special_tokens=True))
[
"Tweet text : @nationalgridus I have no water and the bill is current and paid. Can you do something about this? Label : complaint"
]
|
176b966ffabdbae904d70b3c31cdac0c.txt
|
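The generated sequence echoes the prompt followed by the predicted label, so a little post-processing recovers just the label. This parsing step is not part of the guide text above; it is only an illustrative sketch:

# Illustrative post-processing: the label follows the "Label : " marker in the decoded text.
decoded = tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0]
predicted_label = decoded.split("Label : ")[-1].strip()
print(predicted_label)  # "complaint"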
5b417ea7788f707968c7b3e47dee06e5.txt_chunk_1
|
LoRA
This conceptual guide gives a brief overview of LoRA, a technique that accelerates
the fine-tuning of large models while consuming less memory.
To make fine-tuning more efficient, LoRA’s approach is to represent the weight updates with two smaller
matrices (called update matrices) through low-rank decomposition. These new matrices can be trained to adapt to the
new data while keeping the overall number of changes low. The original weig
|
5b417ea7788f707968c7b3e47dee06e5.txt
|
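To make the low-rank decomposition concrete, here is a minimal toy sketch (not PEFT’s internal implementation): a frozen weight W is combined with a trainable update B·A whose rank r is much smaller than the weight’s dimensions.

import torch
import torch.nn as nn

class ToyLoRALinear(nn.Module):
    """Toy illustration only: y = x @ W.T + scaling * x @ (B @ A).T, with W frozen."""
    def __init__(self, in_features, out_features, r=8, alpha=32):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features), requires_grad=False)  # frozen W
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)  # small update matrix A
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))        # small update matrix B (starts at zero)
        self.scaling = alpha / r

    def forward(self, x):
        # Original output plus the low-rank update; only A and B receive gradients.
        return x @ self.weight.T + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)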
5b417ea7788f707968c7b3e47dee06e5.txt_chunk_2
|
n. These new matrices can be trained to adapt to the
new data while keeping the overall number of changes low. The original weight matrix remains frozen and doesn’t receive
any further adjustments. To produce the final results, both the original and the adapted weights are combined.
This approach has a number of advantages:
LoRA makes fine-tuning more efficient by drastically reducing the number of trainable parameters.
The original pre-traine
|
5b417ea7788f707968c7b3e47dee06e5.txt
|
5b417ea7788f707968c7b3e47dee06e5.txt_chunk_3
|
ages:
LoRA makes fine-tuning more efficient by drastically reducing the number of trainable parameters.
The original pre-trained weights are kept frozen, which means you can have multiple lightweight and portable LoRA models for various downstream tasks built on top of them.
LoRA is orthogonal to many other parameter-efficient methods and can be combined with many of them.
Performance of models fine-tuned using LoRA is comparable to the perfor
|
5b417ea7788f707968c7b3e47dee06e5.txt
|
5b417ea7788f707968c7b3e47dee06e5.txt_chunk_4
|
efficient methods and can be combined with many of them.
Performance of models fine-tuned using LoRA is comparable to the performance of fully fine-tuned models.
LoRA does not add any inference latency because adapter weights can be merged with the base model.
In principle, LoRA can be applied to any subset of weight matrices in a neural network to reduce the number of trainable
parameters. However, for simplicity and further parameter efficien
|
5b417ea7788f707968c7b3e47dee06e5.txt
|
5b417ea7788f707968c7b3e47dee06e5.txt_chunk_5
|
atrices in a neural network to reduce the number of trainable
parameters. However, for simplicity and further parameter efficiency, in Transformer models LoRA is typically applied to
attention blocks only. The resulting number of trainable parameters in a LoRA model depends on the size of the low-rank
update matrices, which is determined mainly by the rank r and the shape of the original weight matrix.
Merge LoRA weights into the base model
|
5b417ea7788f707968c7b3e47dee06e5.txt
|
5b417ea7788f707968c7b3e47dee06e5.txt_chunk_6
|
which is determined mainly by the rank r and the shape of the original weight matrix.
Merge LoRA weights into the base model
While LoRA is significantly smaller and faster to train, you may encounter latency issues during inference due to separately loading the base model and the LoRA model. To eliminate latency, use the merge_and_unload() function to merge the adapter weights with the base model which allows you to effectively use the newly
|
5b417ea7788f707968c7b3e47dee06e5.txt
|
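A minimal usage sketch of merge_and_unload(), assuming a causal LM base model and a hypothetical adapter repository id:

from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")   # example base model
model = PeftModel.from_pretrained(base_model, "your-name/your-lora-adapter")  # hypothetical adapter id
merged_model = model.merge_and_unload()        # folds the LoRA update into the base weights
merged_model.save_pretrained("merged-model")   # can now be used and saved as a standalone model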
5b417ea7788f707968c7b3e47dee06e5.txt_chunk_7
|
e the merge_and_unload() function to merge the adapter weights with the base model which allows you to effectively use the newly merged model as a standalone model.
This works because during training, the smaller weight matrices (A and B in the diagram above) are separate. But once training is complete, the weights can actually be merged into a new weight matrix that is identical in shape to the original.
Utils for LoRA
Use merge_adapter() to merge the LoRA layers
|
5b417ea7788f707968c7b3e47dee06e5.txt
|
5b417ea7788f707968c7b3e47dee06e5.txt_chunk_8
|
n actually be merged into a new weight matrix that is identical in shape to the original.
Utils for LoRA
Use merge_adapter() to merge the LoRA layers into the base model while retaining the PeftModel.
This will help in later unmerging, deleting, loading different adapters and so on.
Use unmerge_adapter() to unmerge the LoRA layers from the base model while retaining the PeftModel.
This will help in later merging, deleting, loading different adapters and so on.
Use u
|
5b417ea7788f707968c7b3e47dee06e5.txt
|
5b417ea7788f707968c7b3e47dee06e5.txt_chunk_9
|
base model while retaining the PeftModel.
This will help in later merging, deleting, loading different adapters and so on.
Use unload() to get back the base model without merging the active LoRA modules.
This will help in applications where you want to get back the pretrained base model and reset it to its original state.
For example, in the Stable Diffusion WebUI, when the user wants to infer with the base model after try
|
5b417ea7788f707968c7b3e47dee06e5.txt
|
5b417ea7788f707968c7b3e47dee06e5.txt_chunk_10
|
t the model to its original state.
For example, in Stable Diffusion WebUi, when the user wants to infer with base model post trying out LoRAs.
Use delete_adapter() to delete an existing adapter.
Use add_weighted_adapter() to combine multiple LoRAs into a new adapter based on the user-provided weighting scheme.
Common LoRA parameters in PEFT
As with other methods supported by PEFT, to fine-tune a model using LoRA, you need to:
Instantiate a ba
|
5b417ea7788f707968c7b3e47dee06e5.txt
|
5b417ea7788f707968c7b3e47dee06e5.txt_chunk_11
|
oRA parameters in PEFT
As with other methods supported by PEFT, to fine-tune a model using LoRA, you need to:
Instantiate a base model.
Create a configuration (LoraConfig) where you define LoRA-specific parameters.
Wrap the base model with get_peft_model() to get a trainable PeftModel.
Train the PeftModel as you normally would train the base model.
LoraConfig allows you to control how LoRA is applied to the base model through the following pa
|
5b417ea7788f707968c7b3e47dee06e5.txt
|
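A minimal sketch of those four steps (the model id and LoRA hyperparameters below are illustrative, not prescriptive):

from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")               # 1. base model (example id)
config = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=32, lora_dropout=0.05)  # 2. LoRA configuration
peft_model = get_peft_model(base_model, config)                                           # 3. wrap into a trainable PeftModel
peft_model.print_trainable_parameters()
# 4. train peft_model exactly as you would train the base model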
5b417ea7788f707968c7b3e47dee06e5.txt_chunk_12
|
ally would train the base model.
LoraConfig allows you to control how LoRA is applied to the base model through the following parameters:
r: the rank of the update matrices, expressed as an int. A lower rank results in smaller update matrices with fewer trainable parameters.
target_modules: The modules (for example, attention blocks) to apply the LoRA update matrices to.
alpha: LoRA scaling factor.
bias: Specifies if the bias parameters should be trai
|
5b417ea7788f707968c7b3e47dee06e5.txt
|
5b417ea7788f707968c7b3e47dee06e5.txt_chunk_13
|
ion blocks) to apply the LoRA update matrices.
alpha: LoRA scaling factor.
bias: Specifies if the bias parameters should be trained. Can be 'none', 'all' or 'lora_only'.
modules_to_save: List of modules apart from LoRA layers to be set as trainable and saved in the final checkpoint. These typically include the model’s custom head that is randomly initialized for the fine-tuning task.
layers_to_transform: List of layers to be transformed by LoRA. If
|
5b417ea7788f707968c7b3e47dee06e5.txt
|
5b417ea7788f707968c7b3e47dee06e5.txt_chunk_14
|
om head that is randomly initialized for the fine-tuning task.
layers_to_transform: List of layers to be transformed by LoRA. If not specified, all layers in target_modules are transformed.
layers_pattern: Pattern to match layer names in target_modules, if layers_to_transform is specified. By default, PeftModel will look at common layer patterns (layers, h, blocks, etc.); use this for exotic and custom models.
LoRA examples
For an example of LoR
|
5b417ea7788f707968c7b3e47dee06e5.txt
|
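Putting the LoraConfig parameters described above together, a configuration might look like the sketch below. The module and layer names are illustrative and depend on the base model architecture; note that the scaling factor is exposed as lora_alpha:

from peft import LoraConfig

config = LoraConfig(
    r=16,                                 # rank of the update matrices
    lora_alpha=32,                        # LoRA scaling factor (alpha)
    target_modules=["q_proj", "v_proj"],  # illustrative attention projections; names vary by model
    bias="lora_only",                     # 'none', 'all' or 'lora_only'
    modules_to_save=["classifier"],       # e.g. a randomly initialized task head
    layers_to_transform=[0, 1, 2, 3],     # only transform these layers of target_modules
    layers_pattern="layers",              # pattern used to locate the layers above
)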
5b417ea7788f707968c7b3e47dee06e5.txt_chunk_15
|
k at common layer patterns (layers, h, blocks, etc.); use this for exotic and custom models.
LoRA examples
For an example of LoRA applied to various downstream tasks, refer to the following guides:
Image classification using LoRA
Semantic segmentation
While the original paper focuses on language models, the technique can be applied to any dense layers in deep learning
models. As such, you can leverage this technique with diffu
|
5b417ea7788f707968c7b3e47dee06e5.txt
|
5b417ea7788f707968c7b3e47dee06e5.txt_chunk_16
|
s, the technique can be applied to any dense layers in deep learning
models. As such, you can leverage this technique with diffusion models. See the Dreambooth fine-tuning with LoRA task guide for an example.
|
5b417ea7788f707968c7b3e47dee06e5.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_1
|
Models
PeftModel is the base model class for specifying the base Transformer model and configuration to apply a PEFT method to. The base PeftModel contains methods for loading and saving models from the Hub, and supports the PromptEncoder for prompt learning.
PeftModel
class peft.PeftModel
<
source
>
(
model: PreTrainedModel
peft_config: PeftConfig
adapter_name: str = 'default'
)
Parameters
model (PreTrainedModel) — The base transfo
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_2
|
eTrainedModel
peft_config: PeftConfig
adapter_name: str = 'default'
)
Parameters
model (PreTrainedModel) — The base transformer model used for Peft.
peft_config (PeftConfig) — The configuration of the Peft model.
Base model encompassing various Peft methods.
Attributes:
base_model (PreTrainedModel) — The base transformer model used for Peft.
peft_config (PeftConfig) — The configuration of the Peft model.
modules_to_save (list of str)
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_3
|
transformer model used for Peft.
peft_config (PeftConfig) — The configuration of the Peft model.
modules_to_save (list of str) — The list of sub-module names to save when
saving the model.
prompt_encoder (PromptEncoder) — The prompt encoder used for Peft if
using PromptLearningConfig.
prompt_tokens (torch.Tensor) — The virtual prompt tokens used for Peft if
using PromptLearningConfig.
transformer_backbone_name (str) — The name of the transform
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_4
|
e virtual prompt tokens used for Peft if
using PromptLearningConfig.
transformer_backbone_name (str) — The name of the transformer
backbone in the base model if using PromptLearningConfig.
word_embeddings (torch.nn.Embedding) — The word embeddings of the transformer backbone
in the base model if using PromptLearningConfig.
create_or_update_model_card
<
source
>
(
output_dir: str
)
Updates or creates the model card to include information about
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_5
|
fig.
create_or_update_model_card
<
source
>
(
output_dir: str
)
Updates or creates the model card to include information about peft:
Adds peft library tag
Adds peft version
Adds quantization information if it was used
disable_adapter
<
source
>
(
)
Disables the adapter module.
forward
<
source
>
(
*args: Any
**kwargs: Any
)
Forward pass of the model.
from_pretrained
<
source
>
(
model: PreTrainedModel
model_id: Union[str, os.PathL
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_6
|
kwargs: Any
)
Forward pass of the model.
from_pretrained
<
source
>
(
model: PreTrainedModel
model_id: Union[str, os.PathLike]
adapter_name: str = 'default'
is_trainable: bool = False
config: Optional[PeftConfig] = None
**kwargs: Any
)
Parameters
model (PreTrainedModel) —
The model to be adapted. The model should be initialized with the
from_pretrained method from the 🤗 Transformers library.
model_id (str or os.PathLike) —
The name
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_7
|
hould be initialized with the
from_pretrained method from the 🤗 Transformers library.
model_id (str or os.PathLike) —
The name of the Lora configuration to use. Can be either:
A string, the model id of a Lora configuration hosted inside a model repo on the Hugging Face
Hub.
A path to a directory containing a Lora configuration file saved using the save_pretrained
method (./my_lora_config_directory/).
adapter_name (str, optional, defaults t
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_8
|
figuration file saved using the save_pretrained
method (./my_lora_config_directory/).
adapter_name (str, optional, defaults to "default") —
The name of the adapter to be loaded. This is useful for loading multiple adapters.
is_trainable (bool, optional, defaults to False) —
Whether the adapter should be trainable or not. If False, the adapter will be frozen and used for
inference.
config (PeftConfig, optional) —
The configuration object to
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_9
|
or not. If False, the adapter will be frozen and used for
inference.
config (PeftConfig, optional) —
The configuration object to use instead of an automatically loaded configuration. This configuration
object is mutually exclusive with model_id and kwargs. This is useful when the configuration is already
loaded before calling from_pretrained.
kwargs — (optional):
Additional keyword arguments passed along to the specific Lora configuration class.
|
f6428888c0a56974a5a0734b7621bd20.txt
|
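A short usage sketch of the method documented above; the repository ids are placeholders (the adapter id mirrors the one used earlier in this document):

from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
peft_model = PeftModel.from_pretrained(
    base_model,
    "your-name/bloomz-560m_PROMPT_TUNING_CAUSAL_LM",  # adapter repo on the Hub or a local directory
    adapter_name="default",
    is_trainable=False,  # keep the adapter frozen for inference
)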
f6428888c0a56974a5a0734b7621bd20.txt_chunk_10
|
ng from_pretrained.
kwargs — (optional):
Additional keyword arguments passed along to the specific Lora configuration class.
Instantiate a LoraModel from a pretrained Lora configuration and weights.
get_base_model
<
source
>
(
)
Returns the base model.
get_nb_trainable_parameters
<
source
>
(
)
Returns the number of trainable parameters and number of all parameters in the model.
get_prompt
<
source
>
(
batch_size: int
)
Retur
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_11
|
number of trainable parameters and number of all parameters in the model.
get_prompt
<
source
>
(
batch_size: int
)
Returns the virtual prompts to use for Peft. Only applicable when peft_config.peft_type != PeftType.LORA.
get_prompt_embedding_to_save
<
source
>
(
adapter_name: str
)
Returns the prompt embedding to save when saving the model. Only applicable when peft_config.peft_type != PeftType.LORA.
print_trainable_parameters
<
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_12
|
dding to save when saving the model. Only applicable when peft_config.peft_type != PeftType.LORA.
print_trainable_parameters
<
source
>
(
)
Prints the number of trainable parameters in the model.
save_pretrained
<
source
>
(
save_directory: str
safe_serialization: bool = False
selected_adapters: Optional[List[str]] = None
**kwargs: Any
)
Parameters
save_directory (str) —
Directory where the adapter model and configuration files will
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_13
|
r]] = None
**kwargs: Any
)
Parameters
save_directory (str) —
Directory where the adapter model and configuration files will be saved (will be created if it does not
exist).
kwargs (additional keyword arguments, optional) —
Additional keyword arguments passed along to the push_to_hub method.
This function saves the adapter model and the adapter configuration files to a directory, so that it can be
reloaded using the LoraModel.from_pret
|
f6428888c0a56974a5a0734b7621bd20.txt
|
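A short usage sketch of save_pretrained (the directory path is a placeholder):

# Saves only the adapter weights and adapter configuration, not the base model weights.
peft_model.save_pretrained("./my_adapter")

# Reload later by pairing the saved adapter with the same base model:
# reloaded = PeftModel.from_pretrained(base_model, "./my_adapter")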
f6428888c0a56974a5a0734b7621bd20.txt_chunk_14
|
s the adapter model and the adapter configuration files to a directory, so that it can be
reloaded using the LoraModel.from_pretrained class method, and also used by the LoraModel.push_to_hub
method.
set_adapter
<
source
>
(
adapter_name: str
)
Sets the active adapter.
PeftModelForSequenceClassification
A PeftModel for sequence classification tasks.
class peft.PeftModelForSequenceClassification
<
source
>
(
model
peft_config: PeftCon
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_15
|
Model for sequence classification tasks.
class peft.PeftModelForSequenceClassification
<
source
>
(
model
peft_config: PeftConfig
adapter_name = 'default'
)
Parameters
model (PreTrainedModel) — Base transformer model.
peft_config (PeftConfig) — Peft config.
Peft model for sequence classification tasks.
Attributes:
config (PretrainedConfig) — The configuration object of the base model.
cls_layer_name (str) — The name of the classific
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_16
|
ibutes:
config (PretrainedConfig) — The configuration object of the base model.
cls_layer_name (str) — The name of the classification layer.
Example:
Copied
>>> from transformers import AutoModelForSequenceClassification
>>> from peft import PeftModelForSequenceClassification, get_peft_config
>>> config = {
... "peft_type": "PREFIX_TUNING",
... "task_type": "SEQ_CLS",
... "inference_mode": False,
... "num_virtual_tokens": 2
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_17
|
"peft_type": "PREFIX_TUNING",
... "task_type": "SEQ_CLS",
... "inference_mode": False,
... "num_virtual_tokens": 20,
... "token_dim": 768,
... "num_transformer_submodules": 1,
... "num_attention_heads": 12,
... "num_layers": 12,
... "encoder_hidden_size": 768,
... "prefix_projection": False,
... "postprocess_past_key_value_function": None,
... }
>>> peft_config = get_peft_config(config)
>>> model = Aut
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_18
|
n": False,
... "postprocess_past_key_value_function": None,
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForSequenceClassification(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 370178 || all params: 108680450 || trainable%: 0.3406113979101117
PeftModelForTokenClassification
A PeftModel for to
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_19
|
params: 370178 || all params: 108680450 || trainable%: 0.3406113979101117
PeftModelForTokenClassification
A PeftModel for token classification tasks.
class peft.PeftModelForTokenClassification
<
source
>
(
model
peft_config: PeftConfig = None
adapter_name = 'default'
)
Parameters
model (PreTrainedModel) — Base transformer model.
peft_config (PeftConfig) — Peft config.
Peft model for token classification tasks.
Attributes:
config
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_20
|
se transformer model.
peft_config (PeftConfig) — Peft config.
Peft model for token classification tasks.
Attributes:
config (PretrainedConfig) — The configuration object of the base model.
cls_layer_name (str) — The name of the classification layer.
Example:
Copied
>>> from transformers import AutoModelForSequenceClassification
>>> from peft import PeftModelForTokenClassification, get_peft_config
>>> config = {
... "peft_type": "P
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_21
|
enceClassification
>>> from peft import PeftModelForTokenClassification, get_peft_config
>>> config = {
... "peft_type": "PREFIX_TUNING",
... "task_type": "TOKEN_CLS",
... "inference_mode": False,
... "num_virtual_tokens": 20,
... "token_dim": 768,
... "num_transformer_submodules": 1,
... "num_attention_heads": 12,
... "num_layers": 12,
... "encoder_hidden_size": 768,
... "prefix_projection": False,
...
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_22
|
num_attention_heads": 12,
... "num_layers": 12,
... "encoder_hidden_size": 768,
... "prefix_projection": False,
... "postprocess_past_key_value_function": None,
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModelForTokenClassification.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForTokenClassification(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 370178 || al
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_23
|
= PeftModelForTokenClassification(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 370178 || all params: 108680450 || trainable%: 0.3406113979101117
PeftModelForCausalLM
A PeftModel for causal language modeling.
class peft.PeftModelForCausalLM
<
source
>
(
model
peft_config: PeftConfig
adapter_name = 'default'
)
Parameters
model (PreTrainedModel) — Base transformer model.
peft_config (PeftConfig) — P
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_24
|
nfig
adapter_name = 'default'
)
Parameters
model (PreTrainedModel) — Base transformer model.
peft_config (PeftConfig) — Peft config.
Peft model for causal language modeling.
Example:
Copied
>>> from transformers import AutoModelForCausalLM
>>> from peft import PeftModelForCausalLM, get_peft_config
>>> config = {
... "peft_type": "PREFIX_TUNING",
... "task_type": "CAUSAL_LM",
... "inference_mode": False,
... "num_v
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_25
|
nfig = {
... "peft_type": "PREFIX_TUNING",
... "task_type": "CAUSAL_LM",
... "inference_mode": False,
... "num_virtual_tokens": 20,
... "token_dim": 1280,
... "num_transformer_submodules": 1,
... "num_attention_heads": 20,
... "num_layers": 36,
... "encoder_hidden_size": 1280,
... "prefix_projection": False,
... "postprocess_past_key_value_function": None,
... }
>>> peft_config = get_peft_config(conf
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_26
|
"prefix_projection": False,
... "postprocess_past_key_value_function": None,
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModelForCausalLM.from_pretrained("gpt2-large")
>>> peft_model = PeftModelForCausalLM(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 1843200 || all params: 775873280 || trainable%: 0.23756456724479544
PeftModelForSeq2SeqLM
A PeftModel for sequence-to-sequence lan
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_27
|
00 || all params: 775873280 || trainable%: 0.23756456724479544
PeftModelForSeq2SeqLM
A PeftModel for sequence-to-sequence language modeling.
class peft.PeftModelForSeq2SeqLM
<
source
>
(
model
peft_config: PeftConfig
adapter_name = 'default'
)
Parameters
model (PreTrainedModel) — Base transformer model.
peft_config (PeftConfig) — Peft config.
Peft model for sequence-to-sequence language modeling.
Example:
Copied
>>> from tran
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_28
|
t_config (PeftConfig) — Peft config.
Peft model for sequence-to-sequence language modeling.
Example:
Copied
>>> from transformers import AutoModelForSeq2SeqLM
>>> from peft import PeftModelForSeq2SeqLM, get_peft_config
>>> config = {
... "peft_type": "LORA",
... "task_type": "SEQ_2_SEQ_LM",
... "inference_mode": False,
... "r": 8,
... "target_modules": ["q", "v"],
... "lora_alpha": 32,
... "lora_dropout": 0.1
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_29
|
erence_mode": False,
... "r": 8,
... "target_modules": ["q", "v"],
... "lora_alpha": 32,
... "lora_dropout": 0.1,
... "fan_in_fan_out": False,
... "enable_lora": None,
... "bias": "none",
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> peft_model = PeftModelForSeq2SeqLM(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 8
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_30
|
ase")
>>> peft_model = PeftModelForSeq2SeqLM(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 884736 || all params: 223843584 || trainable%: 0.3952474242013566
PeftModelForQuestionAnswering
A PeftModel for question answering.
class peft.PeftModelForQuestionAnswering
<
source
>
(
model
peft_config: PeftConfig = None
adapter_name = 'default'
)
Parameters
model (PreTrainedModel) — Base transformer model.
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_31
|
del
peft_config: PeftConfig = None
adapter_name = 'default'
)
Parameters
model (PreTrainedModel) — Base transformer model.
peft_config (PeftConfig) — Peft config.
Peft model for extractive question answering.
Attributes:
config (PretrainedConfig) — The configuration object of the base model.
cls_layer_name (str) — The name of the classification layer.
Example:
Copied
>>> from transformers import AutoModelForQuestionAnswering
>>> f
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_32
|
tr) — The name of the classification layer.
Example:
Copied
>>> from transformers import AutoModelForQuestionAnswering
>>> from peft import PeftModelForQuestionAnswering, get_peft_config
>>> config = {
... "peft_type": "LORA",
... "task_type": "QUESTION_ANS",
... "inference_mode": False,
... "r": 16,
... "target_modules": ["query", "value"],
... "lora_alpha": 32,
... "lora_dropout": 0.05,
... "fan_in_fan_out
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_33
|
6,
... "target_modules": ["query", "value"],
... "lora_alpha": 32,
... "lora_dropout": 0.05,
... "fan_in_fan_out": False,
... "bias": "none",
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModelForQuestionAnswering.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForQuestionAnswering(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 592900 || all params: 108312580
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_34
|
stionAnswering(model, peft_config)
>>> peft_model.print_trainable_parameters()
trainable params: 592900 || all params: 108312580 || trainable%: 0.5473971721475013
PeftModelForFeatureExtraction
A PeftModel for extracting features/embeddings from transformer models.
class peft.PeftModelForFeatureExtraction
<
source
>
(
model
peft_config: PeftConfig = None
adapter_name = 'default'
)
Parameters
model (PreTrainedModel) — Base trans
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_35
|
source
>
(
model
peft_config: PeftConfig = None
adapter_name = 'default'
)
Parameters
model (PreTrainedModel) — Base transformer model.
peft_config (PeftConfig) — Peft config.
Peft model for extracting features/embeddings from transformer models
Attributes:
config (PretrainedConfig) — The configuration object of the base model.
Example:
Copied
>>> from transformers import AutoModel
>>> from peft import PeftModelForFeatureExtracti
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_36
|
t of the base model.
Example:
Copied
>>> from transformers import AutoModel
>>> from peft import PeftModelForFeatureExtraction, get_peft_config
>>> config = {
... "peft_type": "LORA",
... "task_type": "FEATURE_EXTRACTION",
... "inference_mode": False,
... "r": 16,
... "target_modules": ["query", "value"],
... "lora_alpha": 32,
... "lora_dropout": 0.05,
... "fan_in_fan_out": False,
... "bias": "none",
...
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_37
|
, "value"],
... "lora_alpha": 32,
... "lora_dropout": 0.05,
... "fan_in_fan_out": False,
... "bias": "none",
... }
>>> peft_config = get_peft_config(config)
>>> model = AutoModel.from_pretrained("bert-base-cased")
>>> peft_model = PeftModelForFeatureExtraction(model, peft_config)
>>> peft_model.print_trainable_parameters()
|
f6428888c0a56974a5a0734b7621bd20.txt
|
f6428888c0a56974a5a0734b7621bd20.txt_chunk_38
|
ainable_parameters()
|
f6428888c0a56974a5a0734b7621bd20.txt
|
14c02d08a3d458adfe91e8c9d925a41a.txt_chunk_1
|
Prefix tuning for conditional generation
Prefix tuning is an additive method where only a sequence of continuous task-specific vectors is attached to the beginning of the input, or prefix. Only the prefix parameters are optimized and added to the hidden states in every layer of the model. The tokens of the input sequence can still attend to the prefix as virtual tokens. As a result, prefix tuning stores 1000x fewer parameters than
|
14c02d08a3d458adfe91e8c9d925a41a.txt
|
14c02d08a3d458adfe91e8c9d925a41a.txt_chunk_2
|
input sequence can still attend to the prefix as virtual tokens. As a result, prefix tuning stores 1000x fewer parameters than a fully finetuned model, which means you can use one large language model for many tasks.
💡 Read Prefix-Tuning: Optimizing Continuous Prompts for Generation to learn more about prefix tuning.
This guide will show you how to apply prefix tuning to train a t5-large model on the sentences_allagree subset of the financial
|
14c02d08a3d458adfe91e8c9d925a41a.txt
|
14c02d08a3d458adfe91e8c9d925a41a.txt_chunk_3
|
This guide will show you how to apply prefix tuning to train a t5-large model on the sentences_allagree subset of the financial_phrasebank dataset.
Before you begin, make sure you have all the necessary libraries installed:
Copied
!pip install -q peft transformers datasets
Setup
Start by defining the model and tokenizer, text and label columns, and some hyperparameters so it’ll be easier to start training faster later. Set the environmen
|
14c02d08a3d458adfe91e8c9d925a41a.txt
|
14c02d08a3d458adfe91e8c9d925a41a.txt_chunk_4
|
okenizer, text and label columns, and some hyperparameters so it’ll be easier to start training faster later. Set the environment variable TOKENIZERS_PARALLELISM to false to disable the fast Rust-based tokenizer, which processes data in parallel by default, so you can use multiprocessing in Python.
Copied
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, default_data_collator, get_linear_schedule_with_warmup
from peft import get_p
|
14c02d08a3d458adfe91e8c9d925a41a.txt
|
14c02d08a3d458adfe91e8c9d925a41a.txt_chunk_5
|
rmers import AutoTokenizer, AutoModelForSeq2SeqLM, default_data_collator, get_linear_schedule_with_warmup
from peft import get_peft_config, get_peft_model, get_peft_model_state_dict, PrefixTuningConfig, TaskType
from datasets import load_dataset
from torch.utils.data import DataLoader
from tqdm import tqdm
import torch
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"
os.environ["CUDA_VISIBLE_DEVICES"] = "3"
device = "cuda"
model_name_
|
14c02d08a3d458adfe91e8c9d925a41a.txt
|
14c02d08a3d458adfe91e8c9d925a41a.txt_chunk_6
|
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"
os.environ["CUDA_VISIBLE_DEVICES"] = "3"
device = "cuda"
model_name_or_path = "t5-large"
tokenizer_name_or_path = "t5-large"
text_column = "sentence"
label_column = "text_label"
max_length = 128
lr = 1e-2
num_epochs = 5
batch_size = 8
Load dataset
For this guide, you’ll train on the sentences_allagree subset of the financial_phrasebank dataset. This dataset contains financial news
|
14c02d08a3d458adfe91e8c9d925a41a.txt
|
14c02d08a3d458adfe91e8c9d925a41a.txt_chunk_7
|
guide, you’ll train on the sentences_allagree subset of the financial_phrasebank dataset. This dataset contains financial news categorized by sentiment.
Use the 🤗 Datasets train_test_split function to create a training and validation split and convert the label value to the more readable text_label. All of the changes can be applied with the map function:
Copied
from datasets import load_dataset
dataset = load_dataset("financial_phrasebank", "
|
14c02d08a3d458adfe91e8c9d925a41a.txt
|
14c02d08a3d458adfe91e8c9d925a41a.txt_chunk_8
|
be applied with the map function:
Copied
from datasets import load_dataset
dataset = load_dataset("financial_phrasebank", "sentences_allagree")
dataset = dataset["train"].train_test_split(test_size=0.1)
dataset["validation"] = dataset["test"]
del dataset["test"]
classes = dataset["train"].features["label"].names
dataset = dataset.map(
lambda x: {"text_label": [classes[label] for label in x["label"]]},
batched=True,
num_proc=1,
|
14c02d08a3d458adfe91e8c9d925a41a.txt
|
14c02d08a3d458adfe91e8c9d925a41a.txt_chunk_9
|
dataset = dataset.map(
lambda x: {"text_label": [classes[label] for label in x["label"]]},
batched=True,
num_proc=1,
)
dataset["train"][0]
{"sentence": "Profit before taxes was EUR 4.0 mn , down from EUR 4.9 mn .", "label": 0, "text_label": "negative"}
Preprocess dataset
Initialize a tokenizer, and create a function to pad and truncate the model_inputs and labels:
Copied
tokenizer = AutoTokenizer.from_pretrained(model_name_or
|
14c02d08a3d458adfe91e8c9d925a41a.txt
|
14c02d08a3d458adfe91e8c9d925a41a.txt_chunk_10
|
te a function to pad and truncate the model_inputs and labels:
Copied
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
def preprocess_function(examples):
inputs = examples[text_column]
targets = examples[label_column]
model_inputs = tokenizer(inputs, max_length=max_length, padding="max_length", truncation=True, return_tensors="pt")
labels = tokenizer(targets, max_length=2, padding="max_length", truncation=True,
|
14c02d08a3d458adfe91e8c9d925a41a.txt
|
14c02d08a3d458adfe91e8c9d925a41a.txt_chunk_11
|
gth", truncation=True, return_tensors="pt")
labels = tokenizer(targets, max_length=2, padding="max_length", truncation=True, return_tensors="pt")
labels = labels["input_ids"]
labels[labels == tokenizer.pad_token_id] = -100
model_inputs["labels"] = labels
return model_inputs
Use the map function to apply the preprocess_function to the dataset. You can remove the unprocessed columns since the model doesn’t need them anymore:
|
14c02d08a3d458adfe91e8c9d925a41a.txt
|
14c02d08a3d458adfe91e8c9d925a41a.txt_chunk_12
|
pply the preprocess_function to the dataset. You can remove the unprocessed columns since the model doesn’t need them anymore:
Copied
processed_datasets = dataset.map(
preprocess_function,
batched=True,
num_proc=1,
remove_columns=dataset["train"].column_names,
load_from_cache_file=False,
desc="Running tokenizer on dataset",
)
Create a DataLoader from the train and eval datasets. Set pin_memory=True to speed up the dat
|
14c02d08a3d458adfe91e8c9d925a41a.txt
|
14c02d08a3d458adfe91e8c9d925a41a.txt_chunk_13
|
="Running tokenizer on dataset",
)
Create a DataLoader from the train and eval datasets. Set pin_memory=True to speed up the data transfer to the GPU during training if the samples in your dataset are on a CPU.
Copied
train_dataset = processed_datasets["train"]
eval_dataset = processed_datasets["validation"]
train_dataloader = DataLoader(
train_dataset, shuffle=True, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=Tr
|
14c02d08a3d458adfe91e8c9d925a41a.txt
|
14c02d08a3d458adfe91e8c9d925a41a.txt_chunk_14
|
dataloader = DataLoader(
train_dataset, shuffle=True, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True
)
eval_dataloader = DataLoader(eval_dataset, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True)
Train model
Now you can set up your model and make sure it is ready for training. Specify the task in PrefixTuningConfig, create the base t5-large model from AutoModelForSeq2SeqLM, and then wrap t
|
14c02d08a3d458adfe91e8c9d925a41a.txt
|
14c02d08a3d458adfe91e8c9d925a41a.txt_chunk_15
|
for training. Specify the task in PrefixTuningConfig, create the base t5-large model from AutoModelForSeq2SeqLM, and then wrap the model and configuration in a PeftModel. Feel free to print the PeftModel’s parameters and compare it to fully training all the model parameters to see how much more efficient it is!
Copied
peft_config = PrefixTuningConfig(task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, num_virtual_tokens=20)
model = AutoM
|
14c02d08a3d458adfe91e8c9d925a41a.txt
|
14c02d08a3d458adfe91e8c9d925a41a.txt_chunk_16
|
ed
peft_config = PrefixTuningConfig(task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, num_virtual_tokens=20)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name_or_path)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
"trainable params: 983040 || all params: 738651136 || trainable%: 0.13308583065659835"
Set up the optimizer and learning rate scheduler:
Copied
optimizer = torch.optim.AdamW(model.paramet
|
14c02d08a3d458adfe91e8c9d925a41a.txt
|
14c02d08a3d458adfe91e8c9d925a41a.txt_chunk_17
|
le%: 0.13308583065659835"
Set up the optimizer and learning rate scheduler:
Copied
optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=0,
num_training_steps=(len(train_dataloader) * num_epochs),
)
Move the model to the GPU, and then write a training loop to begin!
Copied
model = model.to(device)
for epoch in range(num_epochs):
model.
|
14c02d08a3d458adfe91e8c9d925a41a.txt
|
14c02d08a3d458adfe91e8c9d925a41a.txt_chunk_18
|
the GPU, and then write a training loop to begin!
Copied
model = model.to(device)
for epoch in range(num_epochs):
model.train()
total_loss = 0
for step, batch in enumerate(tqdm(train_dataloader)):
batch = {k: v.to(device) for k, v in batch.items()}
outputs = model(**batch)
loss = outputs.loss
total_loss += loss.detach().float()
loss.backward()
optimizer.step()
lr_scheduler.
|
14c02d08a3d458adfe91e8c9d925a41a.txt
|
14c02d08a3d458adfe91e8c9d925a41a.txt_chunk_19
|
outputs.loss
total_loss += loss.detach().float()
loss.backward()
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
model.eval()
eval_loss = 0
eval_preds = []
for step, batch in enumerate(tqdm(eval_dataloader)):
batch = {k: v.to(device) for k, v in batch.items()}
with torch.no_grad():
outputs = model(**batch)
loss = outputs.loss
eval_lo
|
14c02d08a3d458adfe91e8c9d925a41a.txt
|
14c02d08a3d458adfe91e8c9d925a41a.txt_chunk_20
|
in batch.items()}
with torch.no_grad():
outputs = model(**batch)
loss = outputs.loss
eval_loss += loss.detach().float()
eval_preds.extend(
tokenizer.batch_decode(torch.argmax(outputs.logits, -1).detach().cpu().numpy(), skip_special_tokens=True)
)
eval_epoch_loss = eval_loss / len(eval_dataloader)
eval_ppl = torch.exp(eval_epoch_loss)
train_epoch_loss = total_loss / len
|
14c02d08a3d458adfe91e8c9d925a41a.txt
|
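The chunks end inside the evaluation loop. Once training finishes, the prefix-tuning adapter can be saved or shared like any other PEFT model; a minimal sketch, with a placeholder repository id:

# Save the trained prefix-tuning adapter locally (only the adapter weights and config are written).
model.save_pretrained("t5-large_PREFIX_TUNING_SEQ2SEQ")

# Or push it to the Hub (placeholder repo id; requires being logged in):
# model.push_to_hub("your-name/t5-large_PREFIX_TUNING_SEQ2SEQ")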