Finetuning 2B model taking more GPU memory than 7B parameter model

#25
by navanit - opened

Hi,
So while training phi-2, I don't get why finetuning phi-2, which is only ~2.7B parameters, takes more GPU memory than 7B models like Mistral 7B or Llama 2 7B.

Below is the code snippet using QLoRA.
```python
import torch
import transformers
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig
from trl import SFTTrainer

# 4-bit NF4 quantization with double quantization (QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = transformers.AutoModelForCausalLM.from_pretrained(
    base_model_id,
    trust_remote_code=True,
    config=model_config,
    quantization_config=bnb_config,
    device_map='auto',
)

config = LoraConfig(
    r=64,
    lora_alpha=16,
    target_modules=[
        'Wqkv',
        'out_proj'
    ],
    bias="none",
    lora_dropout=0.05,  # Conventional
    task_type="CAUSAL_LM",
)

training_arguments = TrainingArguments(
    output_dir=output_dir,
    num_train_epochs=2,
    # max_steps=1800,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=1,
    optim="paged_adamw_32bit",
    save_strategy="epoch",
    logging_steps=100,
    logging_strategy="steps",
    learning_rate=2e-4,
    fp16=False,
    bf16=True,
    group_by_length=True,
    disable_tqdm=False,
    report_to="tensorboard",
)

trainer = SFTTrainer(
    model=model,
    peft_config=config,
    dataset_text_field="train",
    train_dataset=dataset,
    max_seq_length=2048,
    tokenizer=tokenizer,
    args=training_arguments,
    packing=False,
)
```
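For a like-for-like comparison, it helps to log peak GPU memory for each base model under identical settings (same batch size, max_seq_length, and LoRA rank). A minimal sketch, assuming a CUDA device and a trainer set up as above:

```python
import torch

def report_peak_memory(tag: str) -> None:
    # Peak GPU memory allocated by tensors since the last reset, in GiB.
    peak_gib = torch.cuda.max_memory_allocated() / 1024**3
    print(f"[{tag}] peak allocated GPU memory: {peak_gib:.2f} GiB")
    torch.cuda.reset_peak_memory_stats()

# Usage: run the same short training loop for each base model and compare, e.g.
#   trainer.train()
#   report_peak_memory("phi-2")
```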

Yep, same here. You need to use batch size 1 and gradient accumulation > 1 to even things out.
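Mapped onto the TrainingArguments from the original snippet, that suggestion looks roughly like this (a sketch; the output_dir value is a placeholder):

```python
from transformers import TrainingArguments

training_arguments = TrainingArguments(
    output_dir="./phi2-qlora",       # placeholder path
    per_device_train_batch_size=1,   # smallest micro-batch to cut activation memory
    gradient_accumulation_steps=4,   # keeps an effective batch size of 4
    optim="paged_adamw_32bit",
    bf16=True,
)
```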

@Yhyu13 After changing the batch size, nothing changes, and regarding gradient checkpointing, isn't that False?
If you can share your training code, it would be helpful.

@Navanit-shorthills I am using LLaMA_Factory, which uses native HF Transformers / Accelerate for training:

```bash
#!/bin/bash

eval "$(conda shell.bash hook)"
conda activate llama_factory

MODEL_NAME=phi-2
STAGE=sft
EPOCH=.01 #3.0
DATA=alpaca_gpt4_zh
SAVE_PATH=./models/$STAGE/$MODEL_NAME-$STAGE-$DATA-$EPOCH
SAVE_PATH_PREDICT=$SAVE_PATH/Predict
MODEL_PATH=./models/$MODEL_NAME
LoRA_TARGET=Wqkv #q_proj,v_proj
TEMPLATE=default
PREDICTION_SAMPLES=20

if [ ! -d $MODEL_PATH ]; then
    echo "Model not found: $MODEL_PATH"
    exit 1
fi

if [ ! -d $SAVE_PATH ]; then
    mkdir -p $SAVE_PATH
fi

if [ ! -d $SAVE_PATH_PREDICT ]; then
    mkdir -p $SAVE_PATH_PREDICT
fi

CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --seed 42 \
    --stage $STAGE \
    --model_name_or_path $MODEL_PATH \
    --dataset $DATA \
    --val_size .1 \
    --template $TEMPLATE \
    --finetuning_type lora \
    --do_train \
    --lora_target $LoRA_TARGET \
    --output_dir $SAVE_PATH \
    --overwrite_output_dir \
    --overwrite_cache \
    --per_device_train_batch_size 1 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 5e-5 \
    --num_train_epochs $EPOCH \
    --do_eval \
    --evaluation_strategy steps \
    --per_device_eval_batch_size 1 \
    --prediction_loss_only \
    --plot_loss \
    --quantization_bit 4 \
    |& tee $SAVE_PATH/train_eval_log.txt

CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage $STAGE \
    --model_name_or_path $MODEL_PATH \
    --do_predict \
    --max_samples $PREDICTION_SAMPLES \
    --predict_with_generate \
    --dataset $DATA \
    --template $TEMPLATE \
    --finetuning_type lora \
    --adapter_name_or_path $SAVE_PATH \
    --output_dir $SAVE_PATH_PREDICT \
    --per_device_eval_batch_size 1 \
    |& tee $SAVE_PATH_PREDICT/predict_log.txt
```

Have you been able to figure out why this is happening?

gugarosa (Microsoft org)

This could be related to not having gradient checkpointing implemented.
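For reference, a rough sketch of how gradient checkpointing is usually enabled in a QLoRA setup like the one above (assuming recent transformers and peft; `model` is the 4-bit model loaded earlier):

```python
from peft import prepare_model_for_kbit_training

# Recompute activations during the backward pass instead of storing them all,
# trading extra compute for lower activation memory.
model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model, use_gradient_checkpointing=True)

# The KV cache is not needed during training and conflicts with checkpointing.
model.config.use_cache = False
```

The same can also be requested via `TrainingArguments(gradient_checkpointing=True)`, though whether it takes effect depends on the model's remote code actually implementing it.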

gugarosa changed discussion status to closed

@gugarosa Gradient checkpointing will probably only reduce memory usage by ~20%. This seems to be a more fundamental issue. I still have the same problem. I would appreciate it if you could find a fix.
