
Can Run "gptq_model-4bit--1g" but not "gptq-4bit-32g-actorder_True"

#12
by 0-hero - opened

I am running the models on an AWS g5.12xlarge instance with 96 GB of VRAM in total (4× A10G, 24 GB each). I can run the gptq_model-4bit--1g branch, but gptq-4bit-32g-actorder_True gives me a CUDA out-of-memory error.

Does the 32g model need more VRAM than the -1g one?
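In case it matters, a minimal sketch of how I would expect the 32g branch to be loaded (the repo name is a placeholder, and this assumes the AutoGPTQ `from_quantized` API). Note that `device_map="auto"` shards the layers across all four GPUs; loading onto a single device would be capped at one card's 24 GB rather than the full 96 GB:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Placeholder repo: substitute the actual model repository.
repo_id = "TheBloke/Llama-2-70B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(repo_id, use_fast=True)

# `revision` selects the branch; device_map="auto" spreads the quantized
# layers across all available GPUs instead of placing them on cuda:0.
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    revision="gptq-4bit-32g-actorder_True",
    use_safetensors=True,
    device_map="auto",
)
```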
