llama-2-13b-chat-hf-gptq / tokenizer.json
build: AutoGPTQ for meta-llama/Llama-2-13b-chat-hf: 4bits, gr128, desc_act=False
671a089
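The commit message records the AutoGPTQ build settings for this repository: 4-bit quantization, group size 128, desc_act=False, applied to meta-llama/Llama-2-13b-chat-hf. As a rough illustration, a quantization run with those settings might look like the sketch below using the AutoGPTQ Python API; the calibration text and output directory name are placeholders, not taken from this repo.

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

base_model = "meta-llama/Llama-2-13b-chat-hf"

# Quantization settings matching the commit message: 4 bits, group size 128, desc_act=False
quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)

# GPTQ needs calibration examples; this single placeholder sentence is illustrative only
examples = [tokenizer("This is a placeholder calibration sample for GPTQ quantization.")]
model.quantize(examples)

# Save the quantized weights and tokenizer (hypothetical output directory name)
model.save_quantized("llama-2-13b-chat-hf-gptq")
tokenizer.save_pretrained("llama-2-13b-chat-hf-gptq")
```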
The tokenizer.json file is too large to display inline; see the raw version instead.