Is it possible to provide a GPTQ quantized model?

#1
by warlock-edward - opened

I'd like to run this model, but my V100 GPU doesn't support AWQ quantization, so I hope you can provide a GPTQ quantized version. If it could be 8-bit GPTQ, that would be even better!

Unfortunately, AutoGPTQ doesn't support this model yet. If you want, you can open an issue here, and once they add support I can quantize it into GPTQ: https://github.com/AutoGPTQ/AutoGPTQ
