SFT taking high memory with Transformers (>5x the amount it takes to load the model checkpoint)

#29
by vermanic - opened

I am trying to do SFT for the model bigcode/starcoderbase-1b on an 80 GB GPU machine (g5.12xlarge).

SFT dataset: https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k (~800 bytes/row × 1000 rows). My actual dataset is different and larger than this, but I am using this one as a benchmark.

I can load the model for inference with only 5 GB of GPU memory consumed, using `AutoModelForCausalLM.from_pretrained(model_name, device_map='auto')`.

But when I do SFT with

num_train_epochs = 1

per_device_train_batch_size = 1

per_device_eval_batch_size = 1

it consumes an additional 30 GB during training (35 GB total).
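In case it helps, this is roughly what my training script looks like (simplified sketch; the hyperparameter names match the real `TrainingArguments` fields, everything else is left at defaults):

```python
import torch

# The only non-default knobs in my run; everything else is
# transformers defaults. (Simplified sketch of the real script.)
train_kwargs = dict(
    num_train_epochs=1,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
)

if torch.cuda.is_available():  # skip the heavy part on CPU-only machines
    from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

    model = AutoModelForCausalLM.from_pretrained(
        "bigcode/starcoderbase-1b", device_map="auto"
    )
    args = TrainingArguments(output_dir="sft-out", **train_kwargs)
    # trainer = Trainer(model=model, args=args, train_dataset=...)
    # trainer.train()  # this is the step that jumps from ~5 GB to ~35 GB
```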

  1. Is it expected for SFT to take this much more memory than loading the model checkpoint and running inference?

  2. Is there any way to do this with less memory, even if it takes more time? (I plan to do SFT on a 15B model, which takes ~60 GB just to load the checkpoint.)
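For question 1, my rough back-of-the-envelope accounting of where the memory could be going (assuming full fine-tuning in fp32 with Adam, which keeps two extra states per parameter — someone please correct me if this is off):

```python
# Back-of-the-envelope training memory for a 1B-parameter model with
# plain Adam in fp32 (activations and CUDA overhead not included).
params = 1_000_000_000

bytes_weights = params * 4      # fp32 weights
bytes_grads   = params * 4      # fp32 gradients
bytes_adam    = params * 4 * 2  # Adam first + second moment estimates

total_gb = (bytes_weights + bytes_grads + bytes_adam) / 1e9
print(f"{total_gb:.0f} GB before activations")  # prints: 16 GB before activations
```

So ~16 GB per 1B parameters before activations, versus ~2–4 GB to merely hold the weights for inference — which would make the ~30 GB jump less surprising.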

(The end goal is a 15B model: its 4-bit quantised checkpoint takes 10–12 GB to load, but SFT needs an additional ~50 GB. I can't do SFT on the non-quantised model, as that causes the error below.)

    return F.dropout(input, self.p, self.training, self.inplace)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/functional.py", line 1252, in dropout
    return _VF.dropout_(input, p, training) if inplace else _VF.dropout(input, p, training)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 192.00 MiB (GPU 0; 22.04 GiB total capacity; 20.72 GiB already allocated; 43.12 MiB free; 20.87 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I tried `!export 'PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512'` — still the same issue.
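One caveat with that attempt: `!export` in a notebook runs in a throwaway subshell, so the variable never reaches the kernel's Python process. A variant that does take effect is setting it from Python before torch is imported (sketch):

```python
import os

# `!export ...` in a notebook runs in a separate subshell, so it never
# reaches the kernel. Set the variable from Python instead, *before*
# torch is imported (the CUDA caching allocator reads it at init):
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

# import torch  # must happen only after the variable is set
```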
