It is very slow, is that correct?

#8 opened by carlosbdw

Hi, I'm running it on a 4x A100 station and generation is very slow, regardless of whether load_in_4bit is True or False. Is this normal?
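
For reference, this is roughly how I'm loading it (a minimal sketch; the repo id, prompt, and generation settings below are placeholders rather than my exact script):

```python
# Sketch of loading the merged fp16 weights with bitsandbytes 4-bit quantization.
# The repo id is an assumption; point it at your own copy of the model if needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "timdettmers/guanaco-33b-merged"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_4bit=True,        # the flag I toggled between True and False
    device_map="auto",        # shard the model across the available GPUs
    torch_dtype=torch.float16,
)

prompt = "### Human: Hello!\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```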

I'm running a single 3090 and getting between 8 and 9 tokens per second. I'm not sure what "very slow" means, but I would expect 4x A100 to be at least double that... Maybe it's not using the GPU with the way you have it set up. Are you using oobabooga/text-generation-webui or something else?

Same issue. Running with load_in_4bit on a single 4090, I get around 2.5 tokens/s. I would have considered that normal and not bad for my specs, but I'm curious about the comment from giblesnot: 9 tokens/s for a 33B model on a 3090 sounds really good.

@Chris126
I'm currently using TheBloke's GPTQ quantization, and with the recent improvements to text-generation-webui (and the related ecosystem) I'm now getting 15+ tokens per second, even though I've set a 250-watt power limit on my 3090. https://huggingface.co/TheBloke/guanaco-33B-GPTQ


Yeah, GPTQ is blazingly fast indeed. But I think you did not mention this in your first answer, which was probably the reason for the misunderstanding. Running inference directly on guanaco-33b-merged is slow (depending on your hardware), but switching to the GPTQ version solves the problem.
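
For anyone landing on this thread, the switch is basically this (a rough sketch using AutoGPTQ; check TheBloke's model card for the exact loading instructions, since older auto-gptq versions may also need model_basename):

```python
# Sketch of loading TheBloke/guanaco-33B-GPTQ with AutoGPTQ instead of the
# merged fp16 weights; this is what gives the much higher tokens/s figures.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "TheBloke/guanaco-33B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    device="cuda:0",
    use_safetensors=True,  # assumes the repo ships safetensors weights
)

prompt = "### Human: Hello!\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```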

Anyway, thank you @giblesnot for the clarification.
