How did you train this without going OOM in RAM & VRAM?

#15
by vicplus - opened

Hey Mr. Nous,

I tried finetuning this with a 4xA100 80GB setup plus 700 GB of RAM. Loading the model alone ate up about half of both the VRAM and the system RAM, and trainer.evaluate() then went OOM because it ran out of RAM.

Any idea what could have happened? I'm using transformers & peft to finetune it.

Sincerely,
Cat

NousResearch org

You probably need Flash Attention 2 and either ZeRO-3 offloading or FSDP in order to train this model on a 4xA100 setup. If you are using LoRA, make sure Flash Attention 2 is enabled.
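
Roughly, that could look like the sketch below with the transformers Trainer; the model id is a placeholder and the batch/FSDP settings are illustrative assumptions, not an exact recipe for this checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM, TrainingArguments

model_id = "NousResearch/your-model-here"  # placeholder, substitute the actual checkpoint

# Flash Attention 2 cuts the attention memory footprint; requires flash-attn to be installed.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)

# Shard parameters, gradients, and optimizer state across the GPUs with FSDP.
# (Alternatively, pass deepspeed="zero3_offload.json" pointing at a ZeRO-3 offload config.)
# The auto-wrap policy defaults from the model, or can be set explicitly via fsdp_config.
args = TrainingArguments(
    output_dir="out",
    bf16=True,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    gradient_checkpointing=True,
    fsdp="full_shard auto_wrap",
)
```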

Will it work as-is with an 8xA100 setup? I am using LoRA w/o FA2, so that might be it. Thank you, I will try that out!

NousResearch org

> Will it work as-is with an 8xA100 setup? I am using LoRA w/o FA2, so that might be it. Thank you, I will try that out!

For an 8-bit LoRA without Flash Attention it won't matter, because that setup just does DDP (each GPU holds a full copy of the model). A full finetune will work on 8x 80GB, or a QLoRA with Flash Attention should.
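
A QLoRA setup along those lines might look roughly like this; the quantization settings and LoRA hyperparameters (rank, alpha, target modules) are illustrative assumptions that depend on the architecture, not a recommended recipe.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "NousResearch/your-model-here"  # placeholder

# 4-bit NF4 quantization keeps the frozen base weights far smaller than 8-bit.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)

model = prepare_model_for_kbit_training(model)

# Example LoRA settings; target_modules should match the model's attention projections.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```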

Also, be sure to enable gradient checkpointing.
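
With the Trainer that is just gradient_checkpointing=True in TrainingArguments, or equivalently on the model itself; disabling the KV cache alongside it is the usual pairing for causal LMs.

```python
model.gradient_checkpointing_enable()
model.config.use_cache = False  # the KV cache is unused during training and conflicts with checkpointing
```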

teknium changed discussion status to closed
