CUDA out of memory

#10
by Blue-Devil - opened

Hi there. Do you have suggestions for the minimum GPU RAM needed to run this? I am using an NVIDIA GPU with 12 GB of RAM, but I could not run it on my machine. Thank you.

Each parameter occupies approximately 2 bytes in fp16 mode and 1 byte in 8-bit mode.

So, just to load this model in 8-bit, you need roughly 12 billion params * 1 byte = 12 GB.

You need at least 4 GB more for inference, so about 16 GB in total.
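For reference, here is a minimal sketch of loading a model of this size in 8-bit with transformers + bitsandbytes. The model ID is a placeholder, not the actual repo:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-12b-model"  # placeholder: use the actual repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_8bit=True,   # ~1 byte/param -> 12e9 params * 1 byte ~= 12 GB
    device_map="auto",   # let accelerate place the weights automatically
)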

Thank you very much for your reply!

Also, @Blue-Devil, keep in mind that while generating you need to watch out for the parameters you pass.

In general, when I used:

outputs = model.generate(
    input_ids,
    temperature=0.9,
    min_length=15,
    early_stopping=True,
    num_beams=8,
    no_repeat_ngram_size=2,
    top_k=40,
    top_p=0.7,
    max_new_tokens=200,
    penalty_alpha=0.6,
    use_cache=False,
    pad_token_id=tokenizer.eos_token_id,
)

I got CUDA OOM errors. There are a couple of reasons for this.

  1. Using a high num_beams: here I used 8, which internally keeps track of 8 different possible generation paths, and that indeed takes up memory. For details, refer to the literature on beam search.
  2. penalty_alpha: surprisingly, this generation parameter also takes up memory (it triggers contrastive search, which keeps extra hidden states around). After going through a hell of OOM errors, I came to know that this parameter was the culprit; see the measurement sketch after this list.
  3. max_new_tokens: this one is obvious; the more tokens you try to generate, the more memory (and time) it takes.
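One way to pin down the culprit is to compare peak GPU memory across generation settings. A minimal sketch, assuming `model` and `tokenizer` are already loaded on a CUDA device:

import torch

def peak_mem_gb(**gen_kwargs):
    # Reset the peak-memory counter, run one generation, report the peak in GB.
    torch.cuda.reset_peak_memory_stats()
    inputs = tokenizer("Hello, world", return_tensors="pt").to(model.device)
    with torch.no_grad():
        model.generate(**inputs, max_new_tokens=50, **gen_kwargs)
    return torch.cuda.max_memory_allocated() / 1e9

print("greedy:     ", peak_mem_gb())
print("8 beams:    ", peak_mem_gb(num_beams=8))
print("contrastive:", peak_mem_gb(penalty_alpha=0.6, top_k=4))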

So, my suggestion is to play with only a couple of generation parameters if you have limited resources. Stick to the simple ones (see the sketch after this list), like:

  1. temperature
  2. top_k
  3. top_p
  4. no_repeat_ngram_size
  5. length_penalty
  6. repetition_penalty

etc.
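For instance, a sampling-only call along these lines stays much lighter on memory than the beam-search + contrastive-search combination above (again assuming `model` and `tokenizer` are loaded; the exact values are just illustrative):

outputs = model.generate(
    input_ids,
    do_sample=True,            # plain sampling: no extra beam/contrastive state
    temperature=0.9,
    top_k=40,
    top_p=0.7,
    no_repeat_ngram_size=2,
    repetition_penalty=1.2,    # illustrative value
    max_new_tokens=200,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))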

Hope it helps 🤗
