Llama-3-8B not giving the entire outcome in Google Colab

#55
by sayanroy07 - opened

Hi Brilliant Minds,

I am writing some basic Llama-3 code in Colab, but it isn't giving me the entire output — the generation gets cut off partway through. Can anyone point out what I missed to increase the output length? I tried max_new_tokens, but no luck.


Regards,
Roy

I have the same problem.

Meta Llama org

The default value for max_new_tokens is 20. Just update that value (see the warning saying that the model-agnostic default was used).
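To illustrate: a minimal sketch (assuming a recent transformers version) showing the model-agnostic default cap of 20 tokens and how to raise it via `max_new_tokens`:

```python
from transformers import GenerationConfig

# The model-agnostic default caps generation at max_length=20 tokens,
# which is why the output appears truncated.
default_cfg = GenerationConfig()
print(default_cfg.max_length)  # 20

# Raise the cap: max_new_tokens counts tokens generated beyond the prompt.
cfg = GenerationConfig(max_new_tokens=256)
print(cfg.max_new_tokens)  # 256
```

In practice you can also pass the parameter directly to the generation call, e.g. `model.generate(**inputs, max_new_tokens=256)` or `pipe(prompt, max_new_tokens=256)`.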

ArthurZ changed discussion status to closed
