g_idx tensors? #2
by mgleize - opened

Hi,

I noticed this model doesn't have ".g_idx" tensors, which my inference code (the latest source version of HF's text-generation-inference) apparently expects. Some of your other models do have them (like stable-vicuna-13B-GPTQ). I understand this possibly has something to do with the version of GPTQ-for-LLaMa used. Could anyone explain what these tensors do at a high level? Is there a way to work around their absence in my code, or are they essential?

AutoGPTQ is unfortunately not an option I want to rely on at the moment (I'm forced to disable Triton due to hardware bugs, and without it AutoGPTQ is very slow).

Btw thank you (TheBloke) for all your work getting these quantized models out so fast, truly a boon :).

Yeah, somehow Text Generation Inference managed to merge a version of GPTQ-for-LLaMa that doesn't support the vast majority of GPTQ models currently on HF.

I specifically use an older version of GPTQ-for-LLaMa to make my quants because it guaranteed compatibility with the widest range of libraries and UIs - AutoGPTQ, text-generation-webui, KoboldAI, ExLlama. This meant using an older version of the GPTQ format, but all the clients that existed before TGI could load both the older and the newer format. I did experiment with making GPTQs in the newer format, but then I got complaints from certain users because their UI couldn't load them.

Now TGI has implemented a version of GPTQ-for-LLaMa that can only load the new format. They could have integrated AutoGPTQ, which would have automatically supported all formats, or they could have implemented the same compatibility code that other clients have. But they did neither.
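As a practical aside, you can tell which format a given checkpoint uses by listing its tensor names: the newer format stores a .g_idx tensor for each quantized layer, the older one doesn't. A minimal sketch using the safetensors library (the file name is a placeholder):

```python
from safetensors import safe_open

def has_g_idx(path: str) -> bool:
    """True if the checkpoint stores .g_idx tensors, i.e. it is in the
    newer GPTQ-for-LLaMa / AutoGPTQ format; older-format checkpoints
    omit them."""
    with safe_open(path, framework="pt") as f:
        return any(name.endswith(".g_idx") for name in f.keys())

print(has_g_idx("model.safetensors"))  # placeholder file name
```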

I will re-evaluate this soon and see what I can do, but I can't make any promises.

Very clear, thank you!

Do you know which version of GPTQ-for-LLaMa they updated to? I need to use TGI to serve simultaneous requests via the API.
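For the simultaneous-requests part: TGI batches concurrent requests on the server side, so a client can simply fire them in parallel against its REST /generate endpoint. A minimal sketch (the URL and generation parameters are placeholders for a local deployment):

```python
import requests
from concurrent.futures import ThreadPoolExecutor

TGI_URL = "http://127.0.0.1:8080/generate"  # placeholder for your deployment

def generate(prompt: str) -> str:
    # TGI's REST API: POST /generate with "inputs" and "parameters"
    resp = requests.post(
        TGI_URL,
        json={"inputs": prompt, "parameters": {"max_new_tokens": 64}},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["generated_text"]

prompts = ["Hello,", "The capital of France is", "GPTQ quantization is"]
# Fire the requests concurrently; TGI interleaves them server-side.
with ThreadPoolExecutor(max_workers=len(prompts)) as pool:
    for text in pool.map(generate, prompts):
        print(text)
```

TGI's continuous batching handles the interleaving, so no client-side coordination beyond parallel requests is needed.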
