FIX transformers compat
#28 by Qubitium - opened
We have a pending AutoGPTQ PR that will enable GPTQ quantization of GLM models. For that AutoGPTQ PR to work, we need this simple method-definition/typing fix to resolve compatibility issues between transformers and AutoGPTQ.
Ready GPTQ quants for testing:
https://huggingface.co/LnL-AI/glm-4-9b-gptq-4bit-qubitium-r1
https://huggingface.co/LnL-AI/glm-4-9b-chat-gptq-4bit-qubitium-r1
Qubitium changed pull request title from FIX autogptq compat to FIX transformers compat