llama2-13b-chat-4bit-AWQ / quant_config.json
Duplicate from TheBloke/Llama-2-13B-chat-AWQ (commit f9de0be)
{
"zero_point": true,
"q_group_size": 128,
"w_bit": 4,
"version": "GEMM"
}
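
This file holds the AWQ settings used to quantize the model: 4-bit weights (w_bit), per-group scales over groups of 128 weights (q_group_size), asymmetric quantization with zero points (zero_point), and the GEMM kernel variant (version). As a minimal sketch of how such a config is typically consumed, the snippet below quantizes a base checkpoint with the AutoAWQ library using these same settings; the model and output paths are placeholders, not taken from this repository.

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-2-13b-chat-hf"  # assumed source checkpoint
quant_path = "llama2-13b-chat-4bit-AWQ"        # assumed output directory

# Same settings as quant_config.json above
quant_config = {
    "zero_point": True,    # asymmetric quantization with zero points
    "q_group_size": 128,   # group size for per-group scales
    "w_bit": 4,            # 4-bit weight precision
    "version": "GEMM",     # GEMM kernel variant
}

# Load the FP16 base model and its tokenizer
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Run AWQ calibration and quantize the weights with this config
model.quantize(tokenizer, quant_config=quant_config)

# Save the quantized model and tokenizer alongside the quant config
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)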