Command for fine-tuning on a multi-GPU machine

by jlopez-dl

Hi,
I have a 4xA100 (80 GB) machine, but I can't get fine-tuning to work; I keep running into OOM errors.

Could you please share your command?

This is the command I'm using:

PYTORCH_CUDA_ALLOC_CONF="max_split_size_mb:256" torchrun --nnodes=1 --nproc_per_node=4 --master_port=3333 \
    finetune.py \
    --base_model "decapoda-research/llama-30b-hf" \
    --data_path './alpaca_data.json' \
    --output_dir './lora-alpaca-30b-multi-gpu' \
    --batch_size 128 \
    --micro_batch_size 4 \
    --num_epochs 10 \
    --cutoff_len 256 \
    --val_set_size 0 \
    --lora_r 8 \
    --lora_alpha 16 \
    --lora_dropout 0.05 \
    --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' \
    --group_by_length
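
For context, my understanding of how these flags interact (assuming finetune.py computes gradient accumulation as batch_size // micro_batch_size and then divides it by the DDP world size, as the tloen/alpaca-lora script does) is sketched below. If that's right, I could lower micro_batch_size to reduce per-GPU activation memory without changing the effective batch size, but I'd still like to know what actually works for you.

# Rough sanity check of the effective batch size (assumption: finetune.py
# derives gradient accumulation as batch_size // micro_batch_size and then
# divides it by the DDP world size).
batch_size = 128          # --batch_size
micro_batch_size = 4      # --micro_batch_size (per-GPU batch per step)
world_size = 4            # GPUs launched via --nproc_per_node

grad_accum = batch_size // micro_batch_size // world_size     # 8
effective_batch = micro_batch_size * world_size * grad_accum  # 128
print(grad_accum, effective_batch)

# Dropping micro_batch_size to 1 should keep the effective batch at 128
# (grad_accum becomes 32) while cutting per-GPU activation memory.
micro_batch_size = 1
grad_accum = batch_size // micro_batch_size // world_size     # 32
print(micro_batch_size * world_size * grad_accum)             # 128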
