llama-2-7b-chat-hf-gptq / added_tokens.json
build: AutoGPTQ for meta-llama/Llama-2-7b-chat-hf: 4bits, gr128, desc_act=False (ffcd734)
{
  "<pad>": 32000
}
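
This file registers a single added token, "<pad>", at id 32000, one slot past the base Llama-2 vocabulary (ids 0-31999). A minimal sketch of checking it after loading the tokenizer, assuming the repo id seonglae/llama-2-7b-chat-hf-gptq (inferred from the page title) and a standard transformers install:

# Minimal check of the added <pad> token; repo id assumed from the page title.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("seonglae/llama-2-7b-chat-hf-gptq")

# added_tokens.json appends <pad> after the 32,000-token base vocabulary,
# so it should resolve to id 32000 and grow the tokenizer to 32,001 entries.
print(tokenizer.convert_tokens_to_ids("<pad>"))  # expected: 32000
print(len(tokenizer))                            # expected: 32001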