Changing maximum input length

#133 by shipWr3ck

I noticed the generation config sets the max length to 4096, while the model has an 8K context length. I want to use this model for a summarization task. Can I simply raise this max length to, say, 6K and set the max output tokens to 1K? Would that cause unstable behaviour?
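
Concretely, I'm thinking of something like the sketch below, using the standard transformers generate API (the model name and exact token budgets are placeholders, not this repo's actual values):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "your-org/your-model"  # placeholder; substitute the actual checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

document = "..."  # long document to summarize
prompt = f"Summarize the following text:\n\n{document}\n\nSummary:"

# Truncate the prompt so input + output stay within the 8K context window;
# capping the input at 7168 tokens leaves room for ~1K generated tokens.
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=7168)

# max_new_tokens counts only generated tokens and takes precedence over the
# generation config's max_length (which counts prompt + output together).
outputs = model.generate(**inputs, max_new_tokens=1024)

# Decode only the newly generated tokens, skipping the echoed prompt.
summary = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(summary)
```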
