Minimum requirements for running inference on 176B model

#195
by gsmoon97 - opened

I am planning to run inference on the 176B model, but I could not find much information regarding the minimum requirements.
If anyone has experience with this, could you please share some insight into the minimum setup required to run inference on the 176B model?

Below are my specifications.

  • 4 x A100 40GB GPUs
  • Can allocate up to 10TB of free disk space

Thank you.

If you look at the files section, you will see that even the smallest version of the 176B model takes over 300GB of disk space.

Thanks for the input! I should have mentioned that I can allocate up to 10TB more space. I've changed the original post accordingly.

What is the largest possible size the model could be?

You need to be able to load the model fully into the GPUs; the disk space is not the limiting factor.

So roughly 400GB of GPU VRAM, at minimum.
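
As a rough back-of-the-envelope check (assuming fp16/bf16 weights at 2 bytes per parameter; actual usage also needs room for activations and framework overhead, hence the ~400GB figure):

```python
# Back-of-the-envelope VRAM estimate for the 176B model's weights alone.
# Assumes fp16/bf16 storage; real inference adds activations, the KV cache,
# and framework overhead on top of this.
n_params = 176e9           # 176 billion parameters
bytes_per_param = 2        # fp16/bf16

weights_gb = n_params * bytes_per_param / 1e9
print(f"fp16 weights alone: {weights_gb:.0f} GB")  # ~352 GB
```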

BigScience Workshop org
edited Feb 16, 2023

If you load the model in 8-bit you can halve the GPU memory requirement (roughly 200GB needed instead of 400GB). Install bitsandbytes and just add load_in_8bit=True when calling from_pretrained.
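
For reference, a minimal loading sketch along these lines (the prompt and generation settings are illustrative assumptions; it presumes a recent transformers release with accelerate and bitsandbytes installed):

```python
# pip install transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigscience/bloom"  # the 176B checkpoint on the Hub

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # shard the layers across all visible GPUs
    load_in_8bit=True,   # bitsandbytes int8 weights: ~176GB instead of ~352GB
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to("cuda:0")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that device_map="auto" will offload layers to CPU (or disk) if the GPUs cannot hold the whole model, which keeps loading from failing but slows inference down considerably.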

Thank you for the suggestion! May I also ask whether loading the model in 8-bit would have any significant effect on model performance?

BigScience Workshop org

You should not observe any performance degradation; check out the paper: https://arxiv.org/abs/2208.07339 or the blog post about the integration: https://huggingface.co/blog/hf-bitsandbytes-integration
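
If you want to verify the memory savings empirically, transformers models expose a footprint helper (a quick check, assuming a model loaded as in the sketch above):

```python
# Reports the memory taken by the model's parameters and buffers, in bytes.
print(f"Model footprint: {model.get_memory_footprint() / 1e9:.1f} GB")
```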
