gpt2-largeGPTQ / quantize_config.json
gpt2-large quantized with GPTQ: 4-bit, group size 128, desc_act=False
{
"bits": 4,
"group_size": 128,
"damp_percent": 0.01,
"desc_act": false,
"static_groups": false,
"sym": true,
"true_sequential": true,
"model_name_or_path": "gpt2-largeGPTQ",
"model_file_base_name": "gptq_model-4bit-128g"
}
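
For context, this is the quantize_config.json format used by the AutoGPTQ library: "bits" is the weight quantization bit-width, "group_size" is the number of weights sharing one set of quantization parameters, "damp_percent" is the Hessian dampening factor used during GPTQ calibration, "desc_act" toggles activation-order quantization, and "model_file_base_name" names the quantized weight file in the repo. Below is a minimal sketch of reproducing these settings and loading the quantized model with AutoGPTQ; the repo id pavfi-at-m/gpt2-largeGPTQ is an assumption inferred from the page header, not confirmed by this file.

# Minimal sketch using the AutoGPTQ package (pip install auto-gptq).
# Assumption: the repo id below is inferred from the page header and may differ.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

# Mirror the settings stored in quantize_config.json above.
quantize_config = BaseQuantizeConfig(
    bits=4,                # 4-bit weight quantization
    group_size=128,        # one scale/zero-point per group of 128 weights
    damp_percent=0.01,     # Hessian dampening used during GPTQ calibration
    desc_act=False,        # no activation-order reordering (faster inference)
    static_groups=False,
    sym=True,              # symmetric quantization
    true_sequential=True,  # quantize layers sequentially within each block
)

repo_id = "pavfi-at-m/gpt2-largeGPTQ"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# from_quantized reads quantize_config.json from the repo; model_basename
# matches the "model_file_base_name" field above.
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    model_basename="gptq_model-4bit-128g",
    device="cuda:0",
)

prompt = "The quick brown fox"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))

Note that desc_act=False trades a small amount of quantization accuracy for faster inference, since weights are not reordered by activation magnitude; this matches the description in the header above.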