llama-2-7b-chat-hf-gptq / tokenizer.json
build: AutoGPTQ for meta-llama/Llama-2-7b-chat-hf: 4bits, gr128, desc_act=False
ffcd734
The file is too large to display inline; check the raw version instead.
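The commit message describes an AutoGPTQ build of meta-llama/Llama-2-7b-chat-hf with 4-bit weights, group size 128 ("gr128"), and desc_act=False. As a minimal sketch of how such a quantized checkpoint (including this tokenizer.json) could be produced with the auto-gptq library, assuming its standard quantization API and using a placeholder output directory and a single illustrative calibration example (not taken from this repository):

```python
# Hedged sketch: reproduce a 4-bit, group-size-128, desc_act=False GPTQ build.
# Paths and the calibration text below are illustrative assumptions.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

base_model = "meta-llama/Llama-2-7b-chat-hf"
out_dir = "llama-2-7b-chat-hf-gptq"  # hypothetical output directory

tokenizer = AutoTokenizer.from_pretrained(base_model, use_fast=True)

quantize_config = BaseQuantizeConfig(
    bits=4,          # 4-bit quantized weights
    group_size=128,  # matches "gr128" in the commit message
    desc_act=False,  # activation-order (desc_act) disabled
)

model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)

# A real quantization run needs a proper calibration set; one example is
# shown only to keep the sketch self-contained.
examples = [
    tokenizer("auto-gptq is an easy-to-use model quantization library.")
]
model.quantize(examples)

model.save_quantized(out_dir)
tokenizer.save_pretrained(out_dir)  # writes tokenizer.json next to the weights
```

Saving the tokenizer alongside the quantized weights is what places a tokenizer.json like this one in the repository; the quantization settings themselves are recorded in the model's quantize_config.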