gptj-6b-base-4bit-AWQ / quant_config.json
Commit 9c8c6fc by Jamie@TitanML: "Upload folder using huggingface_hub"
{
  "zero_point": true,
  "q_group_size": 128,
  "w_bit": 4
}
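A minimal sketch of what these fields mean when loading the config. The field names come straight from the file above; the interpretation comments reflect the usual AWQ (Activation-aware Weight Quantization) conventions, where `w_bit` is the weight bit-width, `q_group_size` is the number of weights sharing one scale, and `zero_point` selects asymmetric quantization. Libraries such as AutoAWQ accept a dict of this shape as their `quant_config`, though the exact loader API is not shown in this repo and is assumed here.

```python
import json

# Parse the repo's quant_config.json contents (inlined here for a
# self-contained example).
quant_config = json.loads(
    '{"zero_point": true, "q_group_size": 128, "w_bit": 4}'
)

# w_bit: weights are stored as 4-bit integers.
# q_group_size: one scale (and zero-point) per group of 128 weights.
# zero_point: true -> asymmetric quantization; a zero-point is stored
#             alongside each group's scale.
assert quant_config["w_bit"] == 4
assert quant_config["q_group_size"] == 128
assert quant_config["zero_point"] is True

print(quant_config["w_bit"])  # -> 4
```

With `q_group_size` of 128, each row of a weight matrix is split into 128-element groups, and each group gets its own quantization scale, which keeps the 4-bit representation accurate without storing a scale per weight.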