Fine-tune Mistral 7B full parameters without LoRA

#131
by HuggingPanda - opened

Hi everyone,
I was searching for a way to fine-tune the Mistral 7B model on my custom data, but all the results were about LoRA. I already have the compute power to fine-tune the full model, so I don't need LoRA. Is there a script available for fine-tuning the whole model?

See https://pytorch.org/tutorials/intermediate/FSDP_adavnced_tutorial.html?highlight=transformer

Transformers has now integrated FSDP into its Trainer, so all you need to do is specify the related FSDP arguments:
training_args = transformers.TrainingArguments(
    ...,
    fsdp="shard_grad_op auto_wrap offload",
    fsdp_config="fsdp_config.json",
    ...,
)

where fsdp_config.json is a JSON configuration file. For Mistral it looks like this:
{
    "backward_prefetch": "backward_pre",
    "transformer_layer_cls_to_wrap": "MistralDecoderLayer"
}

On a machine with 8 x 40 GB GPUs, this works with a micro batch size of 4.
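
Putting the pieces together, here is a minimal end-to-end sketch of full-parameter fine-tuning with the Trainer and the FSDP settings above. This is not the exact script from this thread: the checkpoint name, the toy dataset, the hyperparameters, and the torchrun launch line are illustrative assumptions, so substitute your own data pipeline and tuning choices.

# finetune_mistral.py -- minimal sketch of full-parameter fine-tuning with FSDP.
# Launch with e.g.: torchrun --nproc_per_node=8 finetune_mistral.py
# The dataset below is a toy placeholder; the hyperparameters are illustrative.
import transformers
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "mistralai/Mistral-7B-v0.1"  # assumed base checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Mistral ships without a pad token

model = AutoModelForCausalLM.from_pretrained(model_name)

# Placeholder data: replace with your own tokenized custom dataset.
train_dataset = Dataset.from_dict(
    {"text": ["Example document one.", "Example document two."]}
).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

training_args = TrainingArguments(
    output_dir="mistral-7b-full-ft",
    per_device_train_batch_size=4,      # micro batch size per GPU (8 x 40 GB setup)
    gradient_accumulation_steps=4,
    bf16=True,
    learning_rate=2e-5,
    num_train_epochs=1,
    logging_steps=10,
    fsdp="shard_grad_op auto_wrap offload",
    fsdp_config="fsdp_config.json",     # the JSON file shown above
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)

trainer.train()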

This worked for me on 4 x A10G GPUs.
I used FSDP with a batch size of 1.
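
For anyone on a similarly tight memory budget, a hedged guess at the corresponding TrainingArguments would be batch size 1 plus gradient checkpointing and full sharding with CPU offload. These values are assumptions, not the poster's exact config, and the right settings depend on your sequence length:

# Assumed settings for ~24 GB cards (4 x A10G); not the exact config from this thread.
import transformers

training_args = transformers.TrainingArguments(
    output_dir="mistral-7b-full-ft",
    per_device_train_batch_size=1,        # micro batch 1, as reported above
    gradient_accumulation_steps=16,       # recover a useful effective batch size
    gradient_checkpointing=True,          # trade compute for activation memory
    bf16=True,
    fsdp="full_shard auto_wrap offload",  # shard params/grads/optimizer state, offload to CPU
    fsdp_config="fsdp_config.json",
)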

https://gist.github.com/lewtun/b9d46e00292d9ecdd6fd9628d53c2814
