chunk_id | chunk_content | filename
---|---|---|
b586f643073f7948b3098cace31f8c69.txt_chunk_38
|
might stem from in-place modifications,
initialize the base model just like you did earlier and construct the inference model.
Copied
from peft import PeftConfig, PeftModel
config = PeftConfig.from_pretrained(repo_name)
model = AutoModelForImageClassification.from_pretrained(
config.base_model_name_or_path,
label2id=label2id,
id2label=id2label,
ignore_mismatched_sizes=True, # provide this in case you're planning to fine-tu
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_39
|
label2id=label2id,
id2label=id2label,
ignore_mismatched_sizes=True, # provide this in case you're planning to fine-tune an already fine-tuned checkpoint
)
# Load the LoRA model
inference_model = PeftModel.from_pretrained(model, repo_name)
Let’s now fetch an example image for inference.
Copied
from PIL import Image
import requests
url = "https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/beignets.jpeg"
image =
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_40
|
ort Image
import requests
url = "https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/beignets.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
image
First, instantiate an image_processor from the underlying model repo.
Copied
image_processor = AutoImageProcessor.from_pretrained(repo_name)
Then, prepare the example for inference.
Copied
encoding = image_processor(image.convert("RGB"), return_tensors="pt")
F
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_41
|
name)
Then, prepare the example for inference.
Copied
encoding = image_processor(image.convert("RGB"), return_tensors="pt")
Finally, run inference!
Copied
with torch.no_grad():
outputs = inference_model(**encoding)
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", inference_model.config.id2label[predicted_class_idx])
"Predicted class: beignets"
|
b586f643073f7948b3098cace31f8c69.txt
|
b586f643073f7948b3098cace31f8c69.txt_chunk_42
|
ass:", inference_model.config.id2label[predicted_class_idx])
"Predicted class: beignets"
|
b586f643073f7948b3098cace31f8c69.txt
|
484c666f84643d3c63c5dbcc5c8cccea.txt_chunk_1
|
Fully Sharded Data Parallel
Fully sharded data parallel (FSDP) was developed for distributed training of large pretrained models with up to 1T parameters. FSDP achieves this by sharding the model parameters, gradients, and optimizer states across data parallel processes, and it can also offload sharded model parameters to a CPU. The memory efficiency afforded by FSDP allows you to scale training to larger batch or model sizes.
Currently, FSDP does n
|
484c666f84643d3c63c5dbcc5c8cccea.txt
|
484c666f84643d3c63c5dbcc5c8cccea.txt_chunk_2
|
CPU. The memory efficiency afforded by FSDP allows you to scale training to larger batch or model sizes.
Currently, FSDP does not confer any reduction in GPU memory usage and FSDP with CPU offload actually consumes 1.65x more GPU memory during training. You can track this PyTorch issue for any updates.
FSDP is supported in 🤗 Accelerate, and you can use it with 🤗 PEFT. This guide will help you learn how to use our FSDP training script. You’ll c
|
484c666f84643d3c63c5dbcc5c8cccea.txt
|
484c666f84643d3c63c5dbcc5c8cccea.txt_chunk_3
|
ed in 🤗 Accelerate, and you can use it with 🤗 PEFT. This guide will help you learn how to use our FSDP training script. You’ll configure the script to train a large model for conditional generation.
Configuration
Begin by running the following command to create a FSDP configuration file with 🤗 Accelerate. Use the --config_file flag to save the configuration file to a specific location, otherwise it is saved as a default_config.yaml file in t
|
484c666f84643d3c63c5dbcc5c8cccea.txt
|
484c666f84643d3c63c5dbcc5c8cccea.txt_chunk_4
|
config_file flag to save the configuration file to a specific location, otherwise it is saved as a default_config.yaml file in the 🤗 Accelerate cache.
The configuration file is used to set the default options when you launch the training script.
Copied
accelerate config --config_file fsdp_config.yaml
You’ll be asked a few questions about your setup, and configure the following arguments. For this example, make sure you fully shard the model
|
484c666f84643d3c63c5dbcc5c8cccea.txt
|
484c666f84643d3c63c5dbcc5c8cccea.txt_chunk_5
|
a few questions about your setup, and configure the following arguments. For this example, make sure you fully shard the model parameters, gradients, optimizer states, leverage the CPU for offloading, and wrap model layers based on the Transformer layer class name.
Copied
`Sharding Strategy`: [1] FULL_SHARD (shards optimizer states, gradients and parameters), [2] SHARD_GRAD_OP (shards optimizer states and gradients), [3] NO_SHARD
`Offload P
|
484c666f84643d3c63c5dbcc5c8cccea.txt
|
484c666f84643d3c63c5dbcc5c8cccea.txt_chunk_6
|
optimizer states, gradients and parameters), [2] SHARD_GRAD_OP (shards optimizer states and gradients), [3] NO_SHARD
`Offload Params`: Decides whether to offload parameters and gradients to the CPU
`Auto Wrap Policy`: [1] TRANSFORMER_BASED_WRAP, [2] SIZE_BASED_WRAP, [3] NO_WRAP
`Transformer Layer Class to Wrap`: When using `TRANSFORMER_BASED_WRAP`, the user specifies a comma-separated string of transformer layer class names (case-sensitive) to wrap, e.
|
484c666f84643d3c63c5dbcc5c8cccea.txt
|
484c666f84643d3c63c5dbcc5c8cccea.txt_chunk_7
|
ng `TRANSFORMER_BASED_WRAP`, the user specifies a comma-separated string of transformer layer class names (case-sensitive) to wrap, e.g.,
`BertLayer`, `GPTJBlock`, `T5Block`, `BertLayer,BertEmbeddings,BertSelfOutput`...
`Min Num Params`: minimum number of parameters when using `SIZE_BASED_WRAP`
`Backward Prefetch`: [1] BACKWARD_PRE, [2] BACKWARD_POST, [3] NO_PREFETCH
`State Dict Type`: [1] FULL_STATE_DICT, [2] LOCAL_STATE_DICT, [3] SHARDED_STATE_DICT
|
484c666f84643d3c63c5dbcc5c8cccea.txt
|
484c666f84643d3c63c5dbcc5c8cccea.txt_chunk_8
|
ARD_PRE, [2] BACKWARD_POST, [3] NO_PREFETCH
`State Dict Type`: [1] FULL_STATE_DICT, [2] LOCAL_STATE_DICT, [3] SHARDED_STATE_DICT
For example, your FSDP configuration file may look like the following:
Copied
command_file: null
commands: null
compute_environment: LOCAL_MACHINE
deepspeed_config: {}
distributed_type: FSDP
downcast_bf16: 'no'
dynamo_backend: 'NO'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefe
|
484c666f84643d3c63c5dbcc5c8cccea.txt
|
484c666f84643d3c63c5dbcc5c8cccea.txt_chunk_9
|
FSDP
downcast_bf16: 'no'
dynamo_backend: 'NO'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch_policy: BACKWARD_PRE
fsdp_offload_params: true
fsdp_sharding_strategy: 1
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_transformer_layer_cls_to_wrap: T5Block
gpu_ids: null
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
megatron_lm_config: {}
mixed_precision: 'no'
num
|
484c666f84643d3c63c5dbcc5c8cccea.txt
|
484c666f84643d3c63c5dbcc5c8cccea.txt_chunk_10
|
k: 0
main_process_ip: null
main_process_port: null
main_training_function: main
megatron_lm_config: {}
mixed_precision: 'no'
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_name: null
tpu_zone: null
use_cpu: false
The important parts
Let’s dig a bit deeper into the training script to understand how it works.
The main() function begins with initializing an Accelerator class which handles everything for distributed
|
484c666f84643d3c63c5dbcc5c8cccea.txt
|
484c666f84643d3c63c5dbcc5c8cccea.txt_chunk_11
|
erstand how it works.
The main() function begins with initializing an Accelerator class which handles everything for distributed training, such as automatically detecting your training environment.
💡 Feel free to change the model and dataset inside the main function. If your dataset format is different from the one in the script, you may also need to write your own preprocessing function.
The script also creates a configuration corresponding t
|
484c666f84643d3c63c5dbcc5c8cccea.txt
|
484c666f84643d3c63c5dbcc5c8cccea.txt_chunk_12
|
the script, you may also need to write your own preprocessing function.
The script also creates a configuration corresponding to the 🤗 PEFT method you’re using. For LoRA, you’ll use LoraConfig to specify the task type, and several other important parameters such as the dimension of the low-rank matrices, the matrices scaling factor, and the dropout probability of the LoRA layers. If you want to use a different 🤗 PEFT method, replace LoraConfig
|
484c666f84643d3c63c5dbcc5c8cccea.txt
|
484c666f84643d3c63c5dbcc5c8cccea.txt_chunk_13
|
scaling factor, and the dropout probability of the LoRA layers. If you want to use a different 🤗 PEFT method, replace LoraConfig with the appropriate class.
Next, the script wraps the base model and peft_config with the get_peft_model() function to create a PeftModel.
Copied
def main():
+ accelerator = Accelerator()
model_name_or_path = "t5-base"
base_path = "temp/data/FinancialPhraseBank-v1.0"
+ peft_config = LoraConfig(
|
484c666f84643d3c63c5dbcc5c8cccea.txt
|
484c666f84643d3c63c5dbcc5c8cccea.txt_chunk_14
|
tor()
model_name_or_path = "t5-base"
base_path = "temp/data/FinancialPhraseBank-v1.0"
+ peft_config = LoraConfig(
task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, r=8, lora_alpha=32, lora_dropout=0.1
)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name_or_path)
+ model = get_peft_model(model, peft_config)
Throughout the script, you’ll see the main_process_first and wait_for_everyone functions which h
|
484c666f84643d3c63c5dbcc5c8cccea.txt
|
484c666f84643d3c63c5dbcc5c8cccea.txt_chunk_15
|
_peft_model(model, peft_config)
Throughout the script, you’ll see the main_process_first and wait_for_everyone functions which help control and synchronize when processes are executed.
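For illustration, here is a minimal sketch (not the exact script) of how these two helpers are typically used with an Accelerator instance; `dataset` and `preprocess_function` are placeholder names:
from accelerate import Accelerator

accelerator = Accelerator()

# Run the heavy, cacheable preprocessing on the main process first so the
# other processes can reuse the cached result instead of recomputing it.
with accelerator.main_process_first():
    processed_datasets = dataset.map(preprocess_function, batched=True)  # placeholder call

# Every process blocks here until all processes have caught up.
accelerator.wait_for_everyone()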
After your dataset is prepared, and all the necessary training components are loaded, the script checks if you’re using the fsdp_plugin. PyTorch offers two ways for wrapping model layers in FSDP, automatically or manually. The simplest method is to allow FSDP to
|
484c666f84643d3c63c5dbcc5c8cccea.txt
|
484c666f84643d3c63c5dbcc5c8cccea.txt_chunk_16
|
. PyTorch offers two ways for wrapping model layers in FSDP, automatically or manually. The simplest method is to allow FSDP to automatically recursively wrap model layers without changing any other code. You can choose to wrap the model layers based on the layer name or on the size (number of parameters). In the FSDP configuration file, it uses the TRANSFORMER_BASED_WRAP option to wrap the T5Block layer.
Copied
if getattr(accelerator.state,
|
484c666f84643d3c63c5dbcc5c8cccea.txt
|
484c666f84643d3c63c5dbcc5c8cccea.txt_chunk_17
|
configuration file, it uses the TRANSFORMER_BASED_WRAP option to wrap the T5Block layer.
Copied
if getattr(accelerator.state, "fsdp_plugin", None) is not None:
accelerator.state.fsdp_plugin.auto_wrap_policy = fsdp_auto_wrap_policy(model)
Next, use 🤗 Accelerate’s prepare function to prepare the model, datasets, optimizer, and scheduler for training.
Copied
model, train_dataloader, eval_dataloader, optimizer, lr_scheduler = accelerator.
|
484c666f84643d3c63c5dbcc5c8cccea.txt
|
484c666f84643d3c63c5dbcc5c8cccea.txt_chunk_18
|
ptimizer, and scheduler for training.
Copied
model, train_dataloader, eval_dataloader, optimizer, lr_scheduler = accelerator.prepare(
model, train_dataloader, eval_dataloader, optimizer, lr_scheduler
)
From here, the remainder of the script handles the training loop, evaluation, and sharing your model to the Hub.
Train
Run the following command to launch the training script. Earlier, you saved the configuration file to fsdp_config.yam
|
484c666f84643d3c63c5dbcc5c8cccea.txt
|
484c666f84643d3c63c5dbcc5c8cccea.txt_chunk_19
|
.
Train
Run the following command to launch the training script. Earlier, you saved the configuration file to fsdp_config.yaml, so you’ll need to pass the path to the launcher with the --config_file argument like this:
Copied
accelerate launch --config_file fsdp_config.yaml examples/peft_lora_seq2seq_accelerate_fsdp.py
Once training is complete, the script returns the accuracy and compares the predictions to the labels.
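As an illustration only (the variable names below are assumptions, not the script's exact code), the final comparison amounts to something like:
# Hypothetical names: eval_preds holds decoded predictions and
# dataset["validation"]["text_label"] holds the reference labels.
correct = 0
total = 0
for pred, true in zip(eval_preds, dataset["validation"]["text_label"]):
    if pred.strip() == true.strip():
        correct += 1
    total += 1
accuracy = correct / total * 100
print(f"accuracy = {accuracy:.2f}%")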
|
484c666f84643d3c63c5dbcc5c8cccea.txt
|
484c666f84643d3c63c5dbcc5c8cccea.txt_chunk_20
|
sdp.py
Once training is complete, the script returns the accuracy and compares the predictions to the labels.
|
484c666f84643d3c63c5dbcc5c8cccea.txt
|
fbfea59dc785a5f35312147c99390e02.txt_chunk_1
|
LoRA
This conceptual guide gives a brief overview of LoRA, a technique that accelerates
the fine-tuning of large models while consuming less memory.
To make fine-tuning more efficient, LoRA’s approach is to represent the weight updates with two smaller
matrices (called update matrices) through low-rank decomposition. These new matrices can be trained to adapt to the
new data while keeping the overall number of changes low. The original weig
|
fbfea59dc785a5f35312147c99390e02.txt
|
fbfea59dc785a5f35312147c99390e02.txt_chunk_2
|
n. These new matrices can be trained to adapt to the
new data while keeping the overall number of changes low. The original weight matrix remains frozen and doesn’t receive
any further adjustments. To produce the final results, both the original and the adapted weights are combined.
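To make the idea concrete, here is a minimal PyTorch sketch of a LoRA-style linear layer; it illustrates the decomposition only and is not PEFT's actual implementation:
import torch
import torch.nn as nn

class LoRALinearSketch(nn.Module):
    """Illustrative only: y = x W^T + (alpha / r) * x A^T B^T, with W frozen."""

    def __init__(self, in_features, out_features, r=8, alpha=32):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)  # original weights stay frozen
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)  # update matrix A
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))        # update matrix B, starts at zero
        self.scaling = alpha / r

    def forward(self, x):
        # Combine the frozen base output with the low-rank update.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling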
This approach has a number of advantages:
LoRA makes fine-tuning more efficient by drastically reducing the number of trainable parameters.
The original pre-traine
|
fbfea59dc785a5f35312147c99390e02.txt
|
fbfea59dc785a5f35312147c99390e02.txt_chunk_3
|
ages:
LoRA makes fine-tuning more efficient by drastically reducing the number of trainable parameters.
The original pre-trained weights are kept frozen, which means you can have multiple lightweight and portable LoRA models for various downstream tasks built on top of them.
LoRA is orthogonal to many other parameter-efficient methods and can be combined with many of them.
Performance of models fine-tuned using LoRA is comparable to the perfor
|
fbfea59dc785a5f35312147c99390e02.txt
|
fbfea59dc785a5f35312147c99390e02.txt_chunk_4
|
efficient methods and can be combined with many of them.
Performance of models fine-tuned using LoRA is comparable to the performance of fully fine-tuned models.
LoRA does not add any inference latency because adapter weights can be merged with the base model.
In principle, LoRA can be applied to any subset of weight matrices in a neural network to reduce the number of trainable
parameters. However, for simplicity and further parameter efficien
|
fbfea59dc785a5f35312147c99390e02.txt
|
fbfea59dc785a5f35312147c99390e02.txt_chunk_5
|
atrices in a neural network to reduce the number of trainable
parameters. However, for simplicity and further parameter efficiency, in Transformer models LoRA is typically applied to
attention blocks only. The resulting number of trainable parameters in a LoRA model depends on the size of the low-rank
update matrices, which is determined mainly by the rank r and the shape of the original weight matrix.
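As a rough worked example (illustrative numbers only): adapting a single 1024 × 1024 weight matrix with rank r = 8 adds 8 × (1024 + 1024) = 16,384 trainable parameters, compared with the 1,048,576 parameters of the original matrix, i.e. about 1.6%.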
Merge LoRA weights into the base model
|
fbfea59dc785a5f35312147c99390e02.txt
|
fbfea59dc785a5f35312147c99390e02.txt_chunk_6
|
which is determined mainly by the rank r and the shape of the original weight matrix.
Merge LoRA weights into the base model
While LoRA is significantly smaller and faster to train, you may encounter latency issues during inference due to separately loading the base model and the LoRA model. To eliminate latency, use the merge_and_unload() function to merge the adapter weights with the base model which allows you to effectively use the newly
|
fbfea59dc785a5f35312147c99390e02.txt
|
fbfea59dc785a5f35312147c99390e02.txt_chunk_7
|
e the merge_and_unload() function to merge the adapter weights with the base model which allows you to effectively use the newly merged model as a standalone model.
This works because during training, the smaller weight matrices (A and B) are kept separate. But once training is complete, they can be merged into a single new weight matrix that produces identical results.
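A minimal usage sketch, assuming a LoRA adapter has already been trained and saved; the model and adapter identifiers below are hypothetical:
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Hypothetical identifiers for illustration.
base_model = AutoModelForCausalLM.from_pretrained("base-model-id")
peft_model = PeftModel.from_pretrained(base_model, "your-name/your-lora-adapter")

# Merge the LoRA weights into the base weights and drop the adapter wrappers,
# leaving a plain Transformers model with no extra inference latency.
merged_model = peft_model.merge_and_unload()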
Utils for LoRA
Use merge_adapter() to merge the LoRA layers
|
fbfea59dc785a5f35312147c99390e02.txt
|
fbfea59dc785a5f35312147c99390e02.txt_chunk_8
|
n actually be merged into a new weight matrix that is identical.
Utils for LoRA
Use merge_adapter() to merge the LoRA layers into the base model while retaining the PeftModel.
This will help in later unmerging, deleting, loading different adapters and so on.
Use unmerge_adapter() to unmerge the LoRA layers from the base model while retaining the PeftModel.
This will help in later merging, deleting, loading different adapters and so on.
Use u
|
fbfea59dc785a5f35312147c99390e02.txt
|
fbfea59dc785a5f35312147c99390e02.txt_chunk_9
|
base model while retaining the PeftModel.
This will help in later merging, deleting, loading different adapters and so on.
Use unload() to get back the base model without merging the active LoRA modules.
This helps in applications where you want to get back the pretrained base model and reset the model to its original state.
For example, in the Stable Diffusion WebUI, when the user wants to run inference with the base model after try
|
fbfea59dc785a5f35312147c99390e02.txt
|
fbfea59dc785a5f35312147c99390e02.txt_chunk_10
|
t the model to its original state.
For example, in the Stable Diffusion WebUI, when the user wants to run inference with the base model after trying out LoRAs.
Use delete_adapter() to delete an existing adapter.
Use add_weighted_adapter() to combine multiple LoRAs into a new adapter based on a user-provided weighting scheme, as sketched below.
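A hedged sketch of these utilities in use, assuming model is a PeftModel with LoRA adapters named "default" and "other"; exact method locations and supported arguments can vary between PEFT versions:
# Sketch only: `model` is assumed to be a PeftModel with two LoRA adapters.
model.merge_adapter()      # fold the active LoRA weights into the base weights
model.unmerge_adapter()    # undo the merge, restoring separate LoRA weights

# Combine two existing adapters into a new one with user-provided weights.
model.add_weighted_adapter(
    adapters=["default", "other"],
    weights=[0.7, 0.3],
    adapter_name="combined",
)

model.delete_adapter("other")  # remove an adapter that is no longer needed

base_model = model.unload()    # recover the base model without merging LoRA weights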
Common LoRA parameters in PEFT
As with other methods supported by PEFT, to fine-tune a model using LoRA, you need to:
Instantiate a ba
|
fbfea59dc785a5f35312147c99390e02.txt
|
fbfea59dc785a5f35312147c99390e02.txt_chunk_11
|
oRA parameters in PEFT
As with other methods supported by PEFT, to fine-tune a model using LoRA, you need to:
Instantiate a base model.
Create a configuration (LoraConfig) where you define LoRA-specific parameters.
Wrap the base model with get_peft_model() to get a trainable PeftModel.
Train the PeftModel as you normally would train the base model.
LoraConfig allows you to control how LoRA is applied to the base model through the following pa
|
fbfea59dc785a5f35312147c99390e02.txt
|
fbfea59dc785a5f35312147c99390e02.txt_chunk_12
|
ally would train the base model.
LoraConfig allows you to control how LoRA is applied to the base model through the following parameters:
r: the rank of the update matrices, expressed in int. Lower rank results in smaller update matrices with fewer trainable parameters.
target_modules: The modules (for example, attention blocks) to apply the LoRA update matrices to.
alpha: LoRA scaling factor.
bias: Specifies if the bias parameters should be trai
|
fbfea59dc785a5f35312147c99390e02.txt
|
fbfea59dc785a5f35312147c99390e02.txt_chunk_13
|
ion blocks) to apply the LoRA update matrices.
alpha: LoRA scaling factor.
bias: Specifies if the bias parameters should be trained. Can be 'none', 'all' or 'lora_only'.
modules_to_save: List of modules apart from LoRA layers to be set as trainable and saved in the final checkpoint. These typically include the model's custom head, which is randomly initialized for the fine-tuning task.
layers_to_transform: List of layers to be transformed by LoRA. If
|
fbfea59dc785a5f35312147c99390e02.txt
|
fbfea59dc785a5f35312147c99390e02.txt_chunk_14
|
om head that is randomly initialized for the fine-tuning task.
layers_to_transform: List of layers to be transformed by LoRA. If not specified, all layers in target_modules are transformed.
layers_pattern: Pattern to match layer names in target_modules, if layers_to_transform is specified. By default, PeftModel looks for common layer patterns (layers, h, blocks, etc.); set this for exotic and custom models.
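Putting these parameters together, an illustrative configuration might look like the following; the target module names are assumptions that depend on the architecture of your base model:
from peft import LoraConfig, TaskType, get_peft_model

# Illustrative values; target_modules must match the layer names of your model.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],
    bias="none",
)
# peft_model = get_peft_model(base_model, lora_config)
# peft_model.print_trainable_parameters()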
LoRA examples
For an example of LoR
|
fbfea59dc785a5f35312147c99390e02.txt
|
fbfea59dc785a5f35312147c99390e02.txt_chunk_15
|
k at common layer pattern (layers, h, blocks, etc.), use it for exotic and custom models.
LoRA examples
For examples of applying LoRA to various downstream tasks, refer to the following guides:
Image classification using LoRA
Semantic segmentation
While the original paper focuses on language models, the technique can be applied to any dense layers in deep learning
models. As such, you can leverage this technique with diffu
|
fbfea59dc785a5f35312147c99390e02.txt
|
fbfea59dc785a5f35312147c99390e02.txt_chunk_16
|
s, the technique can be applied to any dense layers in deep learning
models. As such, you can leverage this technique with diffusion models. See the DreamBooth fine-tuning with LoRA task guide for an example.
|
fbfea59dc785a5f35312147c99390e02.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_1
|
P-tuning for sequence classification
It is challenging to finetune large language models for downstream tasks because they have so many parameters. To work around this, you can use prompts to steer the model toward a particular downstream task without fully finetuning a model. Typically, these prompts are handcrafted, which may be impractical because you need very large validation sets to find the best prompts. P-tuning is a method for automa
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_2
|
, which may be impractical because you need very large validation sets to find the best prompts. P-tuning is a method for automatically searching and optimizing for better prompts in a continuous space.
💡 Read GPT Understands, Too to learn more about p-tuning.
This guide will show you how to train a roberta-large model (but you can also use any of the GPT, OPT, or BLOOM models) with p-tuning on the mrpc configuration of the GLUE benchmark.
Befo
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_3
|
(but you can also use any of the GPT, OPT, or BLOOM models) with p-tuning on the mrpc configuration of the GLUE benchmark.
Before you begin, make sure you have all the necessary libraries installed:
Copied
!pip install -q peft transformers datasets evaluate
Setup
To get started, import 🤗 Transformers to create the base model, 🤗 Datasets to load a dataset, 🤗 Evaluate to load an evaluation metric, and 🤗 PEFT to create a PeftModel and setup
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_4
|
he base model, 🤗 Datasets to load a dataset, 🤗 Evaluate to load an evaluation metric, and 🤗 PEFT to create a PeftModel and setup the configuration for p-tuning.
Define the model, dataset, and some basic training hyperparameters:
Copied
from transformers import (
AutoModelForSequenceClassification,
AutoTokenizer,
DataCollatorWithPadding,
TrainingArguments,
Trainer,
)
from peft import (
get_peft_config,
get_peft_mod
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_5
|
zer,
DataCollatorWithPadding,
TrainingArguments,
Trainer,
)
from peft import (
get_peft_config,
get_peft_model,
get_peft_model_state_dict,
set_peft_model_state_dict,
PeftType,
PromptEncoderConfig,
)
from datasets import load_dataset
import evaluate
import torch
model_name_or_path = "roberta-large"
task = "mrpc"
num_epochs = 20
lr = 1e-3
batch_size = 32
Load dataset and metric
Next, load the mrpc configura
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_6
|
"roberta-large"
task = "mrpc"
num_epochs = 20
lr = 1e-3
batch_size = 32
Load dataset and metric
Next, load the mrpc configuration - a corpus of sentence pairs labeled according to whether they’re semantically equivalent or not - from the GLUE benchmark:
Copied
dataset = load_dataset("glue", task)
dataset["train"][0]
{
"sentence1": 'Amrozi accused his brother , whom he called " the witness " , of deliberately distorting his evidence .'
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_7
|
[0]
{
"sentence1": 'Amrozi accused his brother , whom he called " the witness " , of deliberately distorting his evidence .',
"sentence2": 'Referring to him as only " the witness " , Amrozi accused his brother of deliberately distorting his evidence .',
"label": 1,
"idx": 0,
}
From 🤗 Evaluate, load a metric for evaluating the model’s performance. The evaluation module returns the accuracy and F1 scores associated with this speci
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_8
|
tric for evaluating the model’s performance. The evaluation module returns the accuracy and F1 scores associated with this specific task.
Copied
metric = evaluate.load("glue", task)
Now you can use the metric to write a function that computes the accuracy and F1 scores. The compute_metrics function calculates the scores from the model predictions and labels:
Copied
import numpy as np
def compute_metrics(eval_pred):
predictions, label
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_9
|
res from the model predictions and labels:
Copied
import numpy as np
def compute_metrics(eval_pred):
predictions, labels = eval_pred
predictions = np.argmax(predictions, axis=1)
return metric.compute(predictions=predictions, references=labels)
Preprocess dataset
Initialize the tokenizer and configure the padding token to use. If you’re using a GPT, OPT, or BLOOM model, you should set the padding_side to the left; otherwise i
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_10
|
e the padding token to use. If you’re using a GPT, OPT, or BLOOM model, you should set the padding_side to the left; otherwise it’ll be set to the right. Tokenize the sentence pairs and truncate them to the maximum length.
Copied
if any(k in model_name_or_path for k in ("gpt", "opt", "bloom")):
padding_side = "left"
else:
padding_side = "right"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side=padding_side)
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_11
|
eft"
else:
padding_side = "right"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side=padding_side)
if getattr(tokenizer, "pad_token_id") is None:
tokenizer.pad_token_id = tokenizer.eos_token_id
def tokenize_function(examples):
# max_length=None => use the model max length (it's actually the default)
outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_12
|
tually the default)
outputs = tokenizer(examples["sentence1"], examples["sentence2"], truncation=True, max_length=None)
return outputs
Use map to apply the tokenize_function to the dataset, and remove the unprocessed columns because the model won’t need those. You should also rename the label column to labels because that is the expected name for the labels by models in the 🤗 Transformers library.
Copied
tokenized_datasets = dataset.
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_13
|
ecause that is the expected name for the labels by models in the 🤗 Transformers library.
Copied
tokenized_datasets = dataset.map(
tokenize_function,
batched=True,
remove_columns=["idx", "sentence1", "sentence2"],
)
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
Create a collator function with DataCollatorWithPadding to pad the examples in the batches to the longest sequence in the batch:
Copied
data_
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_14
|
r function with DataCollatorWithPadding to pad the examples in the batches to the longest sequence in the batch:
Copied
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, padding="longest")
Train
P-tuning uses a prompt encoder to optimize the prompt parameters, so you’ll need to initialize the PromptEncoderConfig with several arguments:
task_type: the type of task you’re training on, in this case it is sequence classification or
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_15
|
oderConfig with several arguments:
task_type: the type of task you’re training on, in this case it is sequence classification or SEQ_CLS
num_virtual_tokens: the number of virtual tokens to use, or in other words, the prompt
encoder_hidden_size: the hidden size of the encoder used to optimize the prompt parameters
Copied
peft_config = PromptEncoderConfig(task_type="SEQ_CLS", num_virtual_tokens=20, encoder_hidden_size=128)
Create the base robe
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_16
|
pied
peft_config = PromptEncoderConfig(task_type="SEQ_CLS", num_virtual_tokens=20, encoder_hidden_size=128)
Create the base roberta-large model from AutoModelForSequenceClassification, and then wrap the base model and peft_config with get_peft_model() to create a PeftModel. If you’re curious to see how many parameters you’re actually training compared to training on all the model parameters, you can print it out with print_trainable_parameters(
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_17
|
you’re actually training compared to training on all the model parameters, you can print it out with print_trainable_parameters():
Copied
model = AutoModelForSequenceClassification.from_pretrained(model_name_or_path, return_dict=True)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
"trainable params: 1351938 || all params: 355662082 || trainable%: 0.38011867680626127"
From the 🤗 Transformers library, set up the
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_18
|
inable params: 1351938 || all params: 355662082 || trainable%: 0.38011867680626127"
From the 🤗 Transformers library, set up the TrainingArguments class with where you want to save the model to, the training hyperparameters, how to evaluate the model, and when to save the checkpoints:
Copied
training_args = TrainingArguments(
output_dir="your-name/roberta-large-peft-p-tuning",
learning_rate=1e-3,
per_device_train_batch_size=32,
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_19
|
rguments(
output_dir="your-name/roberta-large-peft-p-tuning",
learning_rate=1e-3,
per_device_train_batch_size=32,
per_device_eval_batch_size=32,
num_train_epochs=2,
weight_decay=0.01,
evaluation_strategy="epoch",
save_strategy="epoch",
load_best_model_at_end=True,
)
Then pass the model, TrainingArguments, datasets, tokenizer, data collator, and evaluation function to the Trainer class, which’ll handle the ent
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_20
|
el, TrainingArguments, datasets, tokenizer, data collator, and evaluation function to the Trainer class, which’ll handle the entire training loop for you. Once you’re ready, call train to start training!
Copied
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["test"],
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=comp
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_21
|
eval_dataset=tokenized_datasets["test"],
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics,
)
trainer.train()
Share model
You can store and share your model on the Hub if you’d like. Log in to your Hugging Face account and enter your token when prompted:
Copied
from huggingface_hub import notebook_login
notebook_login()
Upload the model to a specific model repository on the Hub with the pu
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_22
|
om huggingface_hub import notebook_login
notebook_login()
Upload the model to a specific model repository on the Hub with the push_to_hub function:
Copied
model.push_to_hub("your-name/roberta-large-peft-p-tuning", use_auth_token=True)
Inference
Once the model has been uploaded to the Hub, anyone can easily use it for inference. Load the configuration and model:
Copied
import torch
from peft import PeftModel, PeftConfig
from transformer
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_23
|
for inference. Load the configuration and model:
Copied
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSequenceClassification, AutoTokenizer
peft_model_id = "smangrul/roberta-large-peft-p-tuning"
config = PeftConfig.from_pretrained(peft_model_id)
inference_model = AutoModelForSequenceClassification.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_24
|
assification.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(inference_model, peft_model_id)
Get some text and tokenize it:
Copied
classes = ["not equivalent", "equivalent"]
sentence1 = "Coast redwood trees are the tallest trees on the planet and can grow over 300 feet tall."
sentence2 = "The coast redwood trees, which can attain a he
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_25
|
re the tallest trees on the planet and can grow over 300 feet tall."
sentence2 = "The coast redwood trees, which can attain a height of over 300 feet, are the tallest trees on earth."
inputs = tokenizer(sentence1, sentence2, truncation=True, padding="longest", return_tensors="pt")
Pass the inputs to the model to classify the sentences:
Copied
with torch.no_grad():
outputs = model(**inputs).logits
print(outputs)
paraphrased_text = t
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
0971e5cdd64148ba79c14e2604a7b957.txt_chunk_26
|
ify the sentences:
Copied
with torch.no_grad():
outputs = model(**inputs).logits
print(outputs)
paraphrased_text = torch.softmax(outputs, dim=1).tolist()[0]
for i in range(len(classes)):
print(f"{classes[i]}: {int(round(paraphrased_text[i] * 100))}%")
"not equivalent: 4%"
"equivalent: 96%"
|
0971e5cdd64148ba79c14e2604a7b957.txt
|
e8dfa1c467776183e9b66635fd4136c1.txt_chunk_1
|
DreamBooth fine-tuning with LoRA
This guide demonstrates how to use LoRA, a low-rank approximation technique, to fine-tune DreamBooth with the
CompVis/stable-diffusion-v1-4 model.
Although LoRA was initially designed as a technique for reducing the number of trainable parameters in
large-language models, the technique can also be applied to diffusion models. Performing a complete model fine-tuning
of diffusion models is a time-consuming task
|
e8dfa1c467776183e9b66635fd4136c1.txt
|
e8dfa1c467776183e9b66635fd4136c1.txt_chunk_2
|
ue can also be applied to diffusion models. Performing a complete model fine-tuning
of diffusion models is a time-consuming task, which is why lightweight techniques like DreamBooth or Textual Inversion
gained popularity. With the introduction of LoRA, customizing and fine-tuning a model on a specific dataset has become
even faster.
In this guide we’ll be using a DreamBooth fine-tuning script that is available in
PEFT’s GitHub repo. Feel free t
|
e8dfa1c467776183e9b66635fd4136c1.txt
|
e8dfa1c467776183e9b66635fd4136c1.txt_chunk_3
|
e
even faster.
In this guide we’ll be using a DreamBooth fine-tuning script that is available in
PEFT’s GitHub repo. Feel free to explore it and
learn how things work.
Set up your environment
Start by cloning the PEFT repository:
Copied
git clone https://github.com/huggingface/peft
Navigate to the directory containing the training scripts for fine-tuning Dreambooth with LoRA:
Copied
cd peft/examples/lora_dreambooth
Set up your environ
|
e8dfa1c467776183e9b66635fd4136c1.txt
|
e8dfa1c467776183e9b66635fd4136c1.txt_chunk_4
|
aining the training scripts for fine-tuning Dreambooth with LoRA:
Copied
cd peft/examples/lora_dreambooth
Set up your environment: install PEFT and all the required libraries. At the time of writing this guide, we recommend
installing PEFT from source.
Copied
pip install -r requirements.txt
pip install git+https://github.com/huggingface/peft
Fine-tuning DreamBooth
Prepare the images that you will use for fine-tuning the model. Set up
|
e8dfa1c467776183e9b66635fd4136c1.txt
|
e8dfa1c467776183e9b66635fd4136c1.txt_chunk_5
|
s://github.com/huggingface/peft
Fine-tuning DreamBooth
Prepare the images that you will use for fine-tuning the model. Set up a few environment variables:
Copied
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="path-to-instance-images"
export CLASS_DIR="path-to-class-images"
export OUTPUT_DIR="path-to-save-model"
Here:
INSTANCE_DIR: The directory containing the images that you intend to use for training your model
|
e8dfa1c467776183e9b66635fd4136c1.txt
|
e8dfa1c467776183e9b66635fd4136c1.txt_chunk_6
|
DIR="path-to-save-model"
Here:
INSTANCE_DIR: The directory containing the images that you intend to use for training your model.
CLASS_DIR: The directory containing class-specific images. In this example, we use prior preservation to avoid overfitting and language-drift. For prior preservation, you need other images of the same class as part of the training process. However, these images can be generated and the training script will save them
|
e8dfa1c467776183e9b66635fd4136c1.txt
|
e8dfa1c467776183e9b66635fd4136c1.txt_chunk_7
|
f the same class as part of the training process. However, these images can be generated and the training script will save them to a local path you specify here.
OUTPUT_DIR: The destination folder for storing the trained model’s weights.
To learn more about DreamBooth fine-tuning with prior-preserving loss, check out the Diffusers documentation.
Launch the training script with accelerate and pass hyperparameters, as well as LoRA-specific argume
|
e8dfa1c467776183e9b66635fd4136c1.txt
|
e8dfa1c467776183e9b66635fd4136c1.txt_chunk_8
|
he Diffusers documentation.
Launch the training script with accelerate and pass hyperparameters, as well as LoRA-specific arguments to it such as:
use_lora: Enables LoRA in the training script.
lora_r: The dimension used by the LoRA update matrices.
lora_alpha: Scaling factor.
lora_text_encoder_r: LoRA rank for text encoder.
lora_text_encoder_alpha: LoRA alpha (scaling factor) for text encoder.
Here’s what the full set of script arguments may
|
e8dfa1c467776183e9b66635fd4136c1.txt
|
e8dfa1c467776183e9b66635fd4136c1.txt_chunk_9
|
encoder.
lora_text_encoder_alpha: LoRA alpha (scaling factor) for text encoder.
Here’s what the full set of script arguments may look like:
Copied
accelerate launch train_dreambooth.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--instance_data_dir=$INSTANCE_DIR \
--class_data_dir=$CLASS_DIR \
--output_dir=$OUTPUT_DIR \
--train_text_encoder \
--with_prior_preservation --prior_loss_weight=1.0 \
--instance_prompt="a photo of
|
e8dfa1c467776183e9b66635fd4136c1.txt
|
e8dfa1c467776183e9b66635fd4136c1.txt_chunk_10
|
dir=$OUTPUT_DIR \
--train_text_encoder \
--with_prior_preservation --prior_loss_weight=1.0 \
--instance_prompt="a photo of sks dog" \
--class_prompt="a photo of dog" \
--resolution=512 \
--train_batch_size=1 \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--num_class_images=200 \
--use_lora \
--lora_r 16 \
--lora_alpha 27 \
--lora_text_encoder_r 16 \
--lora_text_encoder_alpha 17 \
--learning_rate=1e-4 \
--gra
|
e8dfa1c467776183e9b66635fd4136c1.txt
|
e8dfa1c467776183e9b66635fd4136c1.txt_chunk_11
|
--lora_r 16 \
--lora_alpha 27 \
--lora_text_encoder_r 16 \
--lora_text_encoder_alpha 17 \
--learning_rate=1e-4 \
--gradient_accumulation_steps=1 \
--gradient_checkpointing \
--max_train_steps=800
Inference with a single adapter
To run inference with the fine-tuned model, first specify the base model with which the fine-tuned LoRA weights will be combined:
Copied
import os
import torch
from diffusers import StableDiffusionPi
|
e8dfa1c467776183e9b66635fd4136c1.txt
|
e8dfa1c467776183e9b66635fd4136c1.txt_chunk_12
|
th which the fine-tuned LoRA weights will be combined:
Copied
import os
import torch
from diffusers import StableDiffusionPipeline
from peft import PeftModel, LoraConfig
MODEL_NAME = "CompVis/stable-diffusion-v1-4"
Next, add a function that will create a Stable Diffusion pipeline for image generation. It will combine the weights of
the base model with the fine-tuned LoRA weights using LoraConfig.
Copied
def get_lora_sd_pipeline(
ckp
|
e8dfa1c467776183e9b66635fd4136c1.txt
|
e8dfa1c467776183e9b66635fd4136c1.txt_chunk_13
|
ine the weights of
the base model with the fine-tuned LoRA weights using LoraConfig.
Copied
def get_lora_sd_pipeline(
ckpt_dir, base_model_name_or_path=None, dtype=torch.float16, device="cuda", adapter_name="default"
):
unet_sub_dir = os.path.join(ckpt_dir, "unet")
text_encoder_sub_dir = os.path.join(ckpt_dir, "text_encoder")
if os.path.exists(text_encoder_sub_dir) and base_model_name_or_path is None:
config = LoraCon
|
e8dfa1c467776183e9b66635fd4136c1.txt
|
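The snippet above is truncated mid-function. For orientation, here is a hedged sketch of how such a pipeline-builder might continue, reusing the imports shown earlier; it is an illustration under stated assumptions, not the exact example script:
def get_lora_sd_pipeline(
    ckpt_dir, base_model_name_or_path=None, dtype=torch.float16, device="cuda", adapter_name="default"
):
    unet_sub_dir = os.path.join(ckpt_dir, "unet")
    text_encoder_sub_dir = os.path.join(ckpt_dir, "text_encoder")
    if os.path.exists(text_encoder_sub_dir) and base_model_name_or_path is None:
        # Recover the base model id from the saved adapter configuration.
        config = LoraConfig.from_pretrained(text_encoder_sub_dir)
        base_model_name_or_path = config.base_model_name_or_path
    if base_model_name_or_path is None:
        raise ValueError("Please specify the base model name or path")

    # Build the Stable Diffusion pipeline and attach the LoRA adapters.
    pipe = StableDiffusionPipeline.from_pretrained(base_model_name_or_path, torch_dtype=dtype).to(device)
    pipe.unet = PeftModel.from_pretrained(pipe.unet, unet_sub_dir, adapter_name=adapter_name)
    if os.path.exists(text_encoder_sub_dir):
        pipe.text_encoder = PeftModel.from_pretrained(
            pipe.text_encoder, text_encoder_sub_dir, adapter_name=adapter_name
        )
    return pipe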