llama-2-7b-ko-auto-gptq-full-v2 / special_tokens_map.json
AutoGPTQ model for llama-2-7b-ko-full: 4 bits (commit 578d554)
{
  "bos_token": "<s>",
  "eos_token": "</s>",
  "unk_token": "<unk>"
}
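
For context, a minimal sketch of how this mapping is typically consumed: the Transformers library reads special_tokens_map.json when a tokenizer is loaded from the repository, exposing the entries as tokenizer attributes. The repo id below is an assumption inferred from the file path and author shown above.

# Sketch: loading the tokenizer and inspecting the special tokens
# defined in special_tokens_map.json. The repo id is assumed.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("WGNW/llama-2-7b-ko-auto-gptq-full-v2")

# These attributes are populated from the mapping above.
print(tokenizer.bos_token)  # "<s>"
print(tokenizer.eos_token)  # "</s>"
print(tokenizer.unk_token)  # "<unk>"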