We have a pending AutoGPTQ PR that will enable GPTQ quantization of GLM. For that AutoGPTQ PR to work, this repo needs this simple method-definition/typing fix to resolve compatibility issues between transformers and AutoGPTQ.
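The actual change is in `modeling_chatglm.py`; the sketch below is only a hypothetical illustration (the class names, method, and kwargs are invented) of the general class of problem a signature/typing fix like this addresses: wrapper libraries that introspect or call model methods with extra keyword arguments fail against a rigid signature, while a signature with typed optional parameters and `**kwargs` stays compatible.

```python
from typing import Optional

class ModelBefore:
    # Rigid signature: any extra framework kwarg raises TypeError.
    def forward(self, input_ids, attention_mask):
        return input_ids

class ModelAfter:
    # Tolerant signature: explicit Optional typing plus **kwargs lets
    # callers such as quantization wrappers pass extra keywords safely.
    def forward(self, input_ids, attention_mask: Optional[list] = None, **kwargs):
        return input_ids

def wrapper_call(model):
    # Mimics a wrapper forwarding a framework-specific kwarg.
    return model.forward([1, 2, 3], use_cache=True)
```

With the rigid signature, `wrapper_call(ModelBefore())` raises `TypeError` on the unexpected `use_cache` keyword; with the tolerant one it succeeds.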

GPTQ quants ready for testing:

https://huggingface.co/LnL-AI/glm-4-9b-gptq-4bit-qubitium-r1
https://huggingface.co/LnL-AI/glm-4-9b-chat-gptq-4bit-qubitium-r1

Note: this branch currently has merge conflicts in the following file:
  • modeling_chatglm.py
