End Token fix
#9
by Lasdem · opened
I had an issue where generation did not stop at the end token, because the end token is configured incorrectly.
In special_tokens_map.json, change line 10 to `"content": "<|eot_id|>",`
In tokenizer_config.json, change line 2055 to `"eos_token": "<|eot_id|>",`
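For reference, this is roughly what the corrected entries look like after the fix (surrounding fields are taken from the standard tokenizer format and may differ slightly in your copy of the files):

special_tokens_map.json (the `eos_token` entry, where line 10 is the `content` field):

```json
"eos_token": {
  "content": "<|eot_id|>",
  "lstrip": false,
  "normalized": false,
  "rstrip": false,
  "single_word": false
}
```

tokenizer_config.json (line 2055):

```json
"eos_token": "<|eot_id|>",
```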
PS: I am using the vLLM server to run the model (image: vllm/vllm-openai:latest).
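In case it helps, this is roughly how I launch it (the port, `--ipc=host`, and the parallelism setting are from my setup, not part of the fix; adjust to your hardware):

```sh
docker run --gpus all --ipc=host -p 8000:8000 \
  vllm/vllm-openai:latest \
  --model hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4 \
  --quantization awq \
  --tensor-parallel-size 8  # hardware-dependent; a 405B AWQ model needs multiple GPUs
```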
alvarobartt changed discussion status to closed
I have not tested it, but I can see that tokenizer_config.json, line 2055, still contains the wrong token.
This still needs to be fixed :) Maybe just merge the open PR? https://huggingface.co/hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4/discussions/18
In the meantime, is there a workaround to make this run properly locally?
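One possible workaround (not confirmed by the maintainers) is to pass the stop token explicitly per request instead of patching the files: vLLM's OpenAI-compatible server accepts a `stop_token_ids` extra parameter in the request body. The token id 128009 for `<|eot_id|>` and the endpoint/port are assumptions from a default local setup:

```sh
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4",
        "messages": [{"role": "user", "content": "Say hello."}],
        "stop_token_ids": [128009]
      }'
```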