Trouble running on an H100 (80GB PCIe): runs out of memory when loading to the CUDA device
The runtime appears to run out of memory when executing:
from transformers import LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained("ausboss/llama-30b-supercot").to("cuda")
I'm relatively new to HF and LLMs in general; am I missing something obvious in how I'm going about trying to use this model? If I have access to multiple GPUs at once, is there a way to load this model across them?
Any guidance appreciated!
Memory / Disk requirements:
https://github.com/ggerganov/llama.cpp#memorydisk-requirements
Make sure you have enough memory for a 30B parameter model: about 60 GB if the model isn't quantized, and about 19.5 GB if it is quantized.
Keep in mind this is the minimum memory required for 30B parameter models.
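As a rough sanity check on those figures, weight memory is roughly the parameter count times the bytes per parameter. This is a back-of-the-envelope sketch only; it ignores activations, KV cache, and framework overhead, so real usage runs somewhat higher:

params = 30e9  # ~30B parameters
bytes_per_param = {
    "fp32 (the transformers default)": 4,
    "fp16 / bf16": 2,
    "int8": 1,
    "4-bit": 0.5,
}
for dtype, nbytes in bytes_per_param.items():
    # Weight memory only; actual usage adds activation and runtime overhead.
    print(f"{dtype}: ~{params * nbytes / 1e9:.0f} GB")
# fp32 ~120 GB, fp16 ~60 GB, int8 ~30 GB, 4-bit ~15 GB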
I definitely have enough memory for it (80 GB, in fact), but that still doesn't appear to be enough for this model.
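One likely explanation, worth verifying against your setup: from_pretrained loads the weights in fp32 unless you pass a torch_dtype, and 30B parameters at 4 bytes each is roughly 120 GB, which overflows a single 80 GB card. Here's a minimal sketch that loads in fp16 and lets accelerate place the model across all visible GPUs (which also covers the multi-GPU question in the original post); it assumes accelerate is installed:

import torch
from transformers import LlamaForCausalLM

# Half-precision weights (~60 GB instead of ~120 GB in fp32), sharded
# across all visible GPUs automatically; requires `pip install accelerate`.
model = LlamaForCausalLM.from_pretrained(
    "ausboss/llama-30b-supercot",
    torch_dtype=torch.float16,
    device_map="auto",
)
# With device_map="auto" you should not call .to("cuda") afterwards.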
No idea if this will be helpful in your case, but I found the other day that I would get OOM errors if I didn't have torch compiled with GPU support; once I re-installed torch via their instructions, those errors went away!
EDIT: I also wanted to say that, in case it really is an issue of not having enough memory, the quantization @ComputroniumDev referred to is explained here, and that helped me quite a bit too! There are some models I can only load in 8-bit and can't load at all without it.
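For completeness, here's a minimal sketch of an 8-bit load along those lines. It assumes bitsandbytes and accelerate are installed; depending on your transformers version you may need to pass a BitsAndBytesConfig(load_in_8bit=True) via quantization_config instead of the bare flag:

from transformers import LlamaForCausalLM

# 8-bit quantized weights via bitsandbytes: roughly 1 byte per parameter,
# so ~30 GB for a 30B model plus some overhead.
model = LlamaForCausalLM.from_pretrained(
    "ausboss/llama-30b-supercot",
    load_in_8bit=True,   # requires `pip install bitsandbytes accelerate`
    device_map="auto",
)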