Can you please provide the quantize_config.json file?

#1
by ishotoli - opened

I am able to load and run the GPTQ version in the text-generation-webui, but when I try to run lm-evaluation-harness, I receive an error message indicating that the file or directory "quantize_config.json" does not exist. Could you please provide me with this file so that I can perform the necessary testing? Thank you.

Caldera AI org

This one is for 0cc4m's KoboldAI fork, so there is no quantize_config.json. From the filename I can deduce that 4-bit with no groupsize was used, if that helps.
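Based on that ("4-bit with no groupsize"), a plausible quantize_config.json can be reconstructed by hand. In AutoGPTQ's convention, "no groupsize" is written as group_size: -1. Note that only bits and group_size are implied by the filename; the remaining fields below are assumptions copied from common AutoGPTQ defaults, not confirmed values for this model:

```json
{
  "bits": 4,
  "group_size": -1,
  "damp_percent": 0.01,
  "desc_act": false,
  "sym": true,
  "true_sequential": true
}
```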

As there is no quantize_config.json, can you provide us some sample code showing how to load it, then?

For example, the quantize_config.json in TheBloke/Nous-Hermes-13B-GPTQ:
{
"bits": 4,
"group_size": 128,
"damp_percent": 0.01,
"desc_act": false,
"sym": true,
"true_sequential": true
}
TheBloke/guanaco-65B-GPTQ:
{
"bits": 4,
"group_size": -1,
"damp_percent": 0.01,
"desc_act": true,
"sym": true,
"true_sequential": true
}
I'm uncertain whether this file was created manually or generated automatically by a quantization tool. Thanks.
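To answer the last question: the file is normally written automatically by AutoGPTQ at quantization time, but it is just plain JSON and can be created by hand. As a sketch, the snippet below writes a quantize_config.json into the model directory so tools like lm-evaluation-harness can find it. The bits and group_size values follow the "4-bit, no groupsize" statement above; the other fields are assumed defaults and may need adjusting:

```python
import json

# bits=4 and group_size=-1 follow from "4-bit with no groupsize".
# The remaining fields are assumptions mirroring the TheBloke
# examples quoted above, not confirmed values for this model.
quantize_config = {
    "bits": 4,
    "group_size": -1,
    "damp_percent": 0.01,
    "desc_act": False,
    "sym": True,
    "true_sequential": True,
}

# Write the file next to the model weights (here: current directory).
with open("quantize_config.json", "w") as f:
    json.dump(quantize_config, f, indent=2)
```

With the file in place, loaders that expect an AutoGPTQ-style layout should pick up the quantization parameters without further changes.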

I haven't tried it myself, but have you tried this one yet? It's mentioned as compatible with text-generation-webui.

https://huggingface.co/Yhyu13/30B-Lazarus-gptq-4bit

^ I could not get the above one working either. I chose 4-bit, groupsize 128, and the LLaMA model type, and I get this error when trying to chat:

[screenshot of the error: image.png]
