
How to parse the output and how to evaluate

#1
by kevinpro - opened

Since the model sets <|im_end|> as a special token:

If I set skip_special_tokens=True, I can't extract the output with split("<|im_end|>")[-2].
If I set skip_special_tokens=False, I observe a lot of decoded padding tokens. (See the sketch below for the workaround I'm currently trying.)
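
The workaround I'm trying right now is a rough sketch, not a confirmed recipe: decode only the newly generated tokens, keep special tokens so <|im_end|> survives as a delimiter, then strip padding manually. The model id, prompt, and max_new_tokens below are placeholders, not the actual checkpoint or settings.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model id -- substitute the actual checkpoint discussed here.
model_id = "your-org/your-llama-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "..."  # whatever prompt format this model expects
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)

# Drop the prompt tokens so only the newly generated tokens get decoded.
gen_ids = output_ids[0, inputs["input_ids"].shape[1]:]

# Keep special tokens so <|im_end|> is still present as a delimiter...
text = tokenizer.decode(gen_ids, skip_special_tokens=False)

# ...then remove padding manually and cut at the first <|im_end|>.
if tokenizer.pad_token:
    text = text.replace(tokenizer.pad_token, "")
answer = text.split("<|im_end|>")[0].strip()
print(answer)
```

Is this the intended way to parse the output, or is there an official parsing script?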

What's more, I notice that in the data, the answer is not well formatted.
When I try to evaluate your model, I can't extract the answer to compare against the reference. How should this be done?
