
Wrong bos_token in special_tokens_map.json

#6
by brandonglockaby - opened

Ooba ends up working around it by trimming the token, but when the model is used directly with Transformers, the wrong bos_token causes EOS to be prepended to the input sequence, which makes the generation terrible.
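For anyone hitting this before the fix lands: a minimal sketch of patching a local copy of `special_tokens_map.json` so the BOS entry is no longer the EOS token. This assumes the file stores the tokens as plain strings (some exports wrap them in dicts with a `content` field) and that the file path is a local download; both are assumptions, adjust as needed.

```python
import json

def fix_bos_token(path: str) -> None:
    """Rewrite special_tokens_map.json if bos_token was saved as the EOS token."""
    with open(path) as f:
        tokens = json.load(f)
    # LLaMA's BOS token is "<s>"; the broken file had "</s>" (the EOS token)
    # here, which is why Transformers prepended EOS to every prompt.
    if tokens.get("bos_token") == "</s>":
        tokens["bos_token"] = "<s>"
        with open(path, "w") as f:
            json.dump(tokens, f, indent=2)

# Example (hypothetical local path):
# fix_bos_token("./llama-model/special_tokens_map.json")
```

After patching, reload the tokenizer so the corrected map is picked up.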

Quickly updated! Thanks for your amazing work!

brandonglockaby changed discussion status to closed
