
[AUTOMATED] Model Memory Requirements

#2 opened by model-sizer-bot

Model Memory Requirements

You will need about 1.95 GB of VRAM to load this model for inference in float16/bfloat16 (down to about 498.8 MB in int4), and a peak of about 7.79 GB of VRAM to train it in float16/bfloat16 using Adam. Full per-dtype results are in the table below.

These calculations were generated by the Model Memory Utility Space on the Hub.
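As a rough cross-check, here is a minimal sketch of how the per-dtype totals scale with bytes per parameter, assuming the `accelerate` and `transformers` packages are installed; `"your-model-id"` is a placeholder for this repository's model id:

```python
from accelerate import init_empty_weights
from transformers import AutoConfig, AutoModelForCausalLM

# "your-model-id" is a placeholder; substitute this repository's model id.
config = AutoConfig.from_pretrained("your-model-id")

# Instantiate on the meta device so no real weight memory is allocated.
with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config)

n_params = sum(p.numel() for p in model.parameters())
for dtype, bytes_per_param in [("float32", 4), ("float16", 2), ("int8", 1), ("int4", 0.5)]:
    size_gb = n_params * bytes_per_param / 2**30
    print(f"{dtype}: {size_gb:.2f} GB")
```

Each halving of precision halves the total size, which is exactly the pattern in the results table below.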

The minimum recommended VRAM needed for this model assumes using Accelerate or `device_map="auto"` and is denoted by the size of the "largest layer".
When performing inference, expect to add up to an additional 20% on top of this, as found by EleutherAI. More tests will be performed in the future to get a more accurate benchmark for each model.
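For illustration, loading with `device_map="auto"` might look like the following; the model id is again a placeholder, and `torch_dtype=torch.float16` corresponds to the 1.95 GB float16/bfloat16 row below:

```python
import torch
from transformers import AutoModelForCausalLM

# device_map="auto" lets Accelerate spread layers across available GPUs/CPU,
# so the per-GPU minimum is roughly the largest layer or residual group
# rather than the full model size.
model = AutoModelForCausalLM.from_pretrained(
    "your-model-id",            # placeholder; use this repository's model id
    torch_dtype=torch.float16,
    device_map="auto",
)
```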

When training with Adam, you can expect roughly 4x the reported model size in VRAM to be used (1x for the model, 1x for the gradients, and 2x for the optimizer states).
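That rule of thumb is plain arithmetic, and it can be checked against the float32 row in the table below using the utility's raw byte counts:

```python
# 4x rule of thumb for Adam training, checked against the float32 row below.
model_bytes = 4_184_206_080            # float32 model size in bytes (3.9 GB)
gradient_bytes = model_bytes           # 1x: one gradient per parameter
optimizer_bytes = 2 * model_bytes      # 2x: Adam keeps two moment buffers
peak = model_bytes + gradient_bytes + optimizer_bytes
print(f"{peak / 2**30:.2f} GB")        # 15.59 GB, matching the float32 'step' value
```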

Results:

| dtype | Largest Layer or Residual Group | Total Size | Training using Adam (Peak VRAM) |
|---|---|---|---|
| float32 | 250.02 MB | 3.9 GB | model: 3.9 GB, optimizer: 7.79 GB, gradients: 3.9 GB, step: 15.59 GB |
| float16/bfloat16 | 125.01 MB | 1.95 GB | model: 3.9 GB, optimizer: 7.79 GB, gradients: 5.85 GB, step: 7.79 GB |
| int8 | 62.5 MB | 997.59 MB | not available |
| int4 | 31.25 MB | 498.8 MB | not available |

Training values are converted from the utility's raw byte counts (1 GB = 2^30 bytes); "not available" corresponds to -1 in the raw output, meaning the estimate is not computed for quantized dtypes.
