How many tokens can this model handle?


I find this model works pretty well on different tasks (except for math and coding), and I wonder how many tokens it can handle at once?

@Amantuer How bad is this model at math or coding? I am just about to run some CoT evaluations on 30B models.

@Yhyu13 You could try asking it "what is 15 * 5, then add 5 minus 2" (the correct answer being 75 + 3 = 78), or ask it to write a snake game in Python, and you will see what I mean. Models at this scale are all weak at math and coding. Even GPT-3.5 makes mistakes sometimes; at present, I think only GPT-4 can really be said to be capable at math and coding.
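If you want to script that spot-check rather than use a chat UI, a minimal sketch with the `transformers` text-generation pipeline would look like this (the repo id below is a placeholder, not this model's actual id):

```python
# Minimal sketch of the math spot-check; "some-org/this-model" is a placeholder id.
from transformers import pipeline

generate = pipeline("text-generation", model="some-org/this-model")
out = generate("What is 15 * 5? Then add 5 minus 2.", max_new_tokens=64)
# The correct chain is 15 * 5 = 75, then 75 + (5 - 2) = 78.
print(out[0]["generated_text"])
```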

I don't know the details of this particular model, but I think it's Llama-based, so it probably has a 2048-token limit shared between prompt and completion. Someone correct me if I'm wrong.
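Rather than guessing, you can read the advertised context window straight from the model config (again, the repo id here is a placeholder):

```python
# Sketch: read the context window from the model config; substitute the real repo id.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("some-org/this-model")
# Llama-family configs expose the context window as max_position_embeddings.
print(config.max_position_embeddings)  # 2048 for the original Llama
```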

How well it performs at different token counts also depends on the length of the fine-tuning prompt/completion pairs. Some models have an absolute limit of "x" tokens, but if fine-tuned on shorter pairs, they can perform poorly closer to that limit. My guess, given the fine-tuning dataset, is that it's probably good up to 2048 tokens.
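If you're bumping up against that, it's easy to count your prompt tokens with the model's tokenizer and budget the completion accordingly (placeholder repo id again, and the 2048 figure is the assumed window):

```python
# Sketch: budget prompt + completion against an assumed 2048-token window.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("some-org/this-model")  # placeholder id
prompt = "Write a snake game in Python."
n_prompt = len(tokenizer(prompt)["input_ids"])
print(f"{n_prompt} prompt tokens; {2048 - n_prompt} left for the completion")
```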

Hope this helps the OP, and whoever else is wondering.
