[AUTOMATED] Model Memory Requirements

#35
by model-sizer-bot - opened

Model Memory Requirements

You will need about 13.74 GB of VRAM to load this model for inference in float16/bfloat16 (largest layer or residual group: 432.02 MB), and about 54.98 GB of VRAM to train it using Adam. In int4, loading takes about 3.44 GB (largest layer: 108.0 MB) and training with Adam about 13.74 GB.

These calculations were measured from the Model Memory Utility Space on the Hub.

The minimum recommended VRAM needed for this model assumes using Accelerate or device_map="auto" and is denoted by the size of the "largest layer".
When performing inference, expect to add up to an additional 20% to this, as found by EleutherAI. More tests will be performed in the future to get a more accurate benchmark for each model.
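
As an illustration (not part of the automated report), loading with Accelerate's device_map="auto" might look roughly like the sketch below; the model id is the one discussed later in this thread, and the accelerate package is assumed to be installed.

# Sketch: let Accelerate place layers across the available devices.
# Assumes: pip install transformers accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceH4/zephyr-7b-beta"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision: ~13.74 GB of weights
    device_map="auto",          # dispatch layers to GPU/CPU automatically
)
tokenizer = AutoTokenizer.from_pretrained(model_id)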

When training with Adam, you can expect roughly 4x the reported inference size to be used (1x for the model, 1x for the gradients, and 2x for the optimizer states).

Results:

| dtype | Largest Layer or Residual Group | Total Size | Training using Adam |
|------------------|-----------|----------|-----------|
| float32 | 864.03 MB | 27.49 GB | 109.96 GB |
| float16/bfloat16 | 432.02 MB | 13.74 GB | 54.98 GB |
| int8 | 216.01 MB | 6.87 GB | 27.49 GB |
| int4 | 108.0 MB | 3.44 GB | 13.74 GB |
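
As a quick sanity check on the 4x rule (my illustration, not part of the bot's report), using only the float16/bfloat16 row above:

# Illustrative check of the 4x Adam rule against the table's float16/bfloat16 row.
inference_gb = 13.74            # reported "Total Size" in float16/bfloat16
training_gb = 4 * inference_gb  # 1x weights + 1x gradients + 2x optimizer state
print(training_gb)              # 54.96, matching the reported 54.98 GB up to rounding
# The int8 and int4 rows are roughly 1/2 and 1/4 of the float16 totals, as expected.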

model = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/zephyr-7b-alpha", trust_remote_code=True, torch_dtype=torch.int8)
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-alpha", trust_remote_code=True, torch_dtype=torch.int8)
When I change float16 to int8, the model cannot run. Do you know why?

Can you define "can not run"? Can we get a full trace of what it states for you?

When I run the following code on colab:

!pip install transformers
import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/zephyr-7b-beta", trust_remote_code=True, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta", trust_remote_code=True, torch_dtype=torch.float16)
inputs = tokenizer("do you know the difference btween meter and metre?", return_tensors="pt", return_attention_mask=True)
outputs = model.generate(**inputs, max_length=100, num_beams=1, num_return_sequences=1)
text = tokenizer.batch_decode(outputs)[0]
print(text)
torch.cuda.empty_cache()

The system said:
OutOfMemoryError: CUDA out of memory. Tried to allocate 112.00 MiB. GPU 0 has a total capacty of 14.75 GiB of which 30.81 MiB is free. Process 2098 has 14.71 GiB memory in use. Of the allocated memory 14.45 GiB is allocated by PyTorch, and 153.47 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I have two questions:
1. 20 days ago I could run this on Colab without problems, but now it shows an OutOfMemoryError.
2. When I change torch.float16 to torch.int8, the system says:
ValueError: Can't instantiate MistralForCausalLM model under dtype=torch.int8 since it is not a floating point dtype
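
For reference (my own sketch, not an answer given in this thread): torch_dtype only accepts floating-point dtypes, which is why torch.int8 raises that ValueError. Loading in 8-bit is normally done through a quantization config instead, roughly as below, assuming the bitsandbytes and accelerate packages are installed.

# Sketch: 8-bit quantized loading instead of torch_dtype=torch.int8.
# Assumes: pip install transformers accelerate bitsandbytes
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "HuggingFaceH4/zephyr-7b-beta"
quant_config = BitsAndBytesConfig(load_in_8bit=True)   # store weights in int8

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",   # needed so the quantized layers are placed on the GPU
)
tokenizer = AutoTokenizer.from_pretrained(model_id)    # tokenizers take no dtype

inputs = tokenizer("do you know the difference between meter and metre?",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs)[0])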
