
Updated special_tokens_map.json

#22
by brandonglockaby - opened

Ooba users wouldn't have noticed, but generation is terrible before fixing a mistake in special_tokens_map.json. CarperAI pushed the update:
https://huggingface.co/CarperAI/stable-vicuna-13b-delta/blob/main/special_tokens_map.json
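For anyone who wants to see exactly which fields changed, a quick way is to diff the two JSON files key by key. A minimal sketch (the token values shown here are illustrative placeholders, not the actual before/after contents of the file; load the real files with `json.load`):

```python
import json

def diff_special_tokens(old: dict, new: dict) -> dict:
    """Return {key: (old_value, new_value)} for every entry that differs."""
    keys = set(old) | set(new)
    return {k: (old.get(k), new.get(k)) for k in keys if old.get(k) != new.get(k)}

# Illustrative contents only -- replace with json.load(open(path))
# on the old and updated special_tokens_map.json files.
old_map = {"bos_token": "<s>", "eos_token": "", "unk_token": "<unk>"}
new_map = {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>"}

changed = diff_special_tokens(old_map, new_map)
for key, (before, after) in sorted(changed.items()):
    print(f"{key}: {before!r} -> {after!r}")
```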

OK thanks. I've had people PR this before, but I didn't want to change it because a) I couldn't detect any difference with or without the change, and b) I wanted to match the source repo.

Now that the source repo has changed, I have made the update as well.

Can you elaborate on what the difference in generation is? I tested in raw Python as well as in text-generation-webui, and couldn't notice any obvious differences.
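One plausible failure mode, if the mistake involved the eos entry (the thread doesn't spell out the exact diff, so this is an assumption): when the configured eos token doesn't match what the model actually emits, the decode loop never sees a stop signal and keeps sampling past the natural end of the reply. A toy loop illustrating the mechanism, with hypothetical token values:

```python
def generate(token_stream, eos_token, max_len=10):
    """Toy decode loop: emit tokens until the configured eos token
    appears, or until max_len is reached. Very loosely mirrors how a
    wrong eos entry lets generation run past the intended stop."""
    out = []
    for tok in token_stream:
        if tok == eos_token:     # matching eos -> clean stop
            break
        out.append(tok)
        if len(out) >= max_len:  # safety cap, like max_new_tokens
            break
    return out

# Hypothetical model output: a reply, its end-of-sequence marker,
# then the junk the model would ramble into if it never stopped.
model_output = ["Sure,", "here", "you", "go.", "</s>", "###", "garbage"]

print(generate(model_output, eos_token="</s>"))  # stops cleanly at </s>
print(generate(model_output, eos_token=""))      # never matches; junk leaks through
```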
