
Inconsistency with end of string token

#4
by cvdbdo - opened

When generating text, I get either the regular EOS token or <|end_of_turn|> as the end-of-sequence token. With <|end_of_turn|>, the generation doesn't stop.

OpenOrca org

Can you provide more details on the execution environment? Which prompt format are you using? We've only tested with the one from the OpenOrca model.

cvdbdo changed discussion title from Inconsistency with end of strins token to Inconsistency with end of string token

I use the Alpaca Instruct format, with an Open-Orca/OpenOrca-Platypus2-13B fine-tuned on a specialized instruct dataset. The behaviour described is common to both the base and fine-tuned models. It happens with or without quantization (4 & 8 bits). I load them with a simple AutoModelForCausalLM.

from transformers import GenerationConfig

# Near-greedy decoding settings
generation_config = GenerationConfig(
    temperature=0.0001,
    top_p=0,
    top_k=0,
    repetition_penalty=1,
)

The problem is mainly a performance one, because the model keeps generating after the EOS token.
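
For context, a rough sketch of the kind of loading code described above; the model name, device map, and quantization flag are assumptions for illustration, not the exact script used:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Open-Orca/OpenOrca-Platypus2-13B"  # assumed; a fine-tuned checkpoint path would go here

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    load_in_8bit=True,  # requires bitsandbytes; drop for full precision, or use load_in_4bit instead
)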

How do I fine-tune this model? Can anyone please connect with me and help? I want to learn how to fine-tune it on my own data.
Email: darshankholakiya12@gmail.com
LinkedIn: https://www.linkedin.com/in/darshankholakiya/


Set the end-of-turn token as a stop token in GenerationConfig. If you don't know the token ID, simply encode <|end_of_turn|> with the tokenizer to get its ID, then set the stop token (eos_token_id) to that ID.
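
A minimal sketch of that suggestion, assuming <|end_of_turn|> is present as a single token in the model's tokenizer vocabulary:

from transformers import AutoTokenizer, GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("Open-Orca/OpenOrca-Platypus2-13B")

# Encode the end-of-turn marker to look up its token ID
# (returns the unknown-token ID if it is not actually in the vocabulary).
end_of_turn_id = tokenizer.convert_tokens_to_ids("<|end_of_turn|>")

generation_config = GenerationConfig(
    temperature=0.0001,
    top_p=0,
    top_k=0,
    repetition_penalty=1,
    # Stop on either the tokenizer's default EOS token or <|end_of_turn|>.
    eos_token_id=[tokenizer.eos_token_id, end_of_turn_id],
)

Passing this config to model.generate(..., generation_config=generation_config) should make generation halt as soon as either token is produced.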
