ruGPT-13B-4bit / quantize_config.json
AutoGPTQ quantization config for ruGPT-13B: 4-bit, group size 128, desc_act=False
{
"bits": 4,
"group_size": 128,
"damp_percent": 0.01,
"desc_act": false,
"sym": true,
"true_sequential": true,
"model_name_or_path": "ruGPT-13B-4bit",
"model_file_base_name": "gptq_model-4bit-128g"
}
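
For reference, a minimal loading sketch. It assumes the model is published on the Hugging Face Hub as gurgutan/ruGPT-13B-4bit (the repository id is an assumption, not confirmed by this page), that the auto-gptq and transformers packages are installed, and that a CUDA device is available. AutoGPTQ's from_quantized reads quantize_config.json automatically; model_basename must match the "model_file_base_name" field above.

from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

# Assumed repository id; adjust to the actual model location.
repo_id = "gurgutan/ruGPT-13B-4bit"

tokenizer = AutoTokenizer.from_pretrained(repo_id)

# Loads the 4-bit GPTQ weights; quantize_config.json supplies
# bits, group_size, desc_act, etc.
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    model_basename="gptq_model-4bit-128g",
    device="cuda:0",
)

prompt = "Привет! Как дела?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))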