Severe repetition when input length is longer than 7k tokens

#1
by zefanwang - opened

Hi, I really appreciate the work. I am running inference with LlongOrca-13B-16k (with extremely low temperature and top_p) on long text, and I've noticed severe repetition in the output. A rough sketch of my setup is below.
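Roughly, the generation call looks like the following (the exact sampling values here are illustrative, not my precise configuration, and repetition_penalty is just one mitigation I have been considering):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Open-Orca/LlongOrca-13B-16k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "..."  # a long input, >7k tokens
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.1,         # extremely low temperature
    top_p=0.1,               # extremely low top_p
    repetition_penalty=1.1,  # a possible mitigation; doesn't fix the root cause
)
# Decode only the newly generated tokens
print(tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
))
```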
I'm curious whether the finetuning data from the Orca dataset was concatenated into long samples; if not, this behavior might be expected.
Additionally, it would be greatly appreciated if you could share more insights into the training process.
