Is there a way to finetune these AWQ beasts?

opened by yasmolin

Apparently there is no way to finetune this AWQ beauty, and it's for inference only?

Correct, I'm not aware of any support for training AWQ models at this time. For training with quantization, your options are bitsandbytes (i.e. QLoRA) or GPTQ. For training I recommend the Axolotl framework, which supports both QLoRA and training of GPTQ models.
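For a sense of what Axolotl is doing under the hood, a QLoRA setup with bitsandbytes and PEFT looks roughly like this (a minimal sketch; the model name and LoRA hyperparameters are placeholders):

```python
# Minimal QLoRA sketch with bitsandbytes + PEFT.
# The model name and hyperparameters below are placeholders.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                    # 4-bit base weights (the "Q" in QLoRA)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",           # placeholder base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # which projections get LoRA adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # only the small adapter weights train
# ... then hand `model` to a Trainer / training loop as usual ...
```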

Now that Transformers supports AWQ, it's theoretically possible that PEFT training support could come in the future.

Tagging @casperhansen (author of AutoAWQ) and @ybelkada (Hugging Face staff, responsible for the Transformers AWQ and GPTQ integration) to make them aware of this request.

Generally, if you are on a tight budget, I would recommend training quantized models with QLoRA, merging the adapter into the base model, and then quantizing to your preferred format, e.g. AWQ.
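Concretely, the merge step looks something like this with PEFT (a rough sketch; the paths are placeholders, and the base model should be loaded unquantized so the adapter weights can be folded in):

```python
# Sketch of merging a QLoRA adapter back into the base model.
# Paths are placeholders; load the base model unquantized for the merge.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("base-model", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "path/to/qlora-adapter")

merged = model.merge_and_unload()  # folds the LoRA weights into the base weights
merged.save_pretrained("merged-model")
```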

AWQ is not compatible with PEFT yet, and I am not deep enough into the subject of training with quantized models to tell you if AWQ would be better than GPTQ in that scenario.

Indeed, I second what @casperhansen said. The recommended workflow is:
1- Fine-tune the base model using QLoRA on your target domain
2- Further quantize it with AWQ / GPTQ using tools such as AutoAWQ (see the sketch below)
3- Deploy the AWQ/GPTQ model for faster inference
You can read more about it here: https://huggingface.co/blog/overview-quantization-transformers
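For step 2, quantizing with AutoAWQ looks roughly like this (a sketch following the AutoAWQ examples; the paths are placeholders and the quant_config values are common defaults, not a recommendation):

```python
# Sketch of quantizing the merged model with AutoAWQ.
# Paths are placeholders; quant_config uses the usual example values.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "merged-model"
quant_path = "merged-model-awq"
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

model.quantize(tokenizer, quant_config=quant_config)  # runs AWQ calibration
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```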

@casperhansen I'm looking forward to the day when AWQ and PEFT play nicely together. I'm currently creating several GPTQ-LoRA adapters, one for each of my tasks. That way, I can keep just one GPTQ base model in VRAM at all times and then enable one adapter at a time, depending on where I am in my pipeline. Obviously, I would prefer to be doing this with the superior AWQ method. πŸ™‚
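For anyone setting up something similar, the adapter swapping looks roughly like this with PEFT (a sketch; the model ID and adapter paths are invented for illustration):

```python
# Sketch of one quantized base model in VRAM with hot-swapped task adapters.
# The model ID and adapter paths are made up for illustration.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "some-org/some-model-GPTQ",  # hypothetical GPTQ base model
    device_map="auto",
)

# Attach the first adapter, then register more by name.
model = PeftModel.from_pretrained(base, "adapters/summarize", adapter_name="summarize")
model.load_adapter("adapters/classify", adapter_name="classify")

model.set_adapter("summarize")  # activate the summarization adapter
# ... run the summarization stage of the pipeline ...
model.set_adapter("classify")   # switch adapters without reloading the base model
```

Since only the small adapter weights change, switching tasks is nearly free compared to reloading the base model.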
