Finetune Falcon-40b with a large token size.

#44
by amnasher - opened

Hello, I wanted to ask: my inputs are 4k+ tokens long. Can I fine-tune the 40B-instruct / 7B-instruct model on my own data, given that the stated max sequence length is 2048?
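One common workaround (not Falcon-specific, and sketched here on plain token-id lists rather than a real tokenizer) is to split each long example into overlapping windows that each fit the 2048-token context, then fine-tune on the windows. The function name, stride value, and overlap scheme below are illustrative assumptions, not anything from the Falcon repo:

```python
def chunk_tokens(token_ids, max_len=2048, stride=256):
    """Split a long token sequence into overlapping windows of at most
    max_len tokens, so each window fits the model's context size.
    The stride of overlap carries some context across window boundaries."""
    if len(token_ids) <= max_len:
        return [token_ids]
    chunks = []
    step = max_len - stride  # advance by less than max_len to overlap
    for start in range(0, len(token_ids), step):
        chunks.append(token_ids[start:start + max_len])
        if start + max_len >= len(token_ids):
            break  # last window already reaches the end of the document
    return chunks

# Example: a 4500-token document split for a 2048-token model.
doc = list(range(4500))
windows = chunk_tokens(doc)
print([len(w) for w in windows])  # → [2048, 2048, 916]
```

Each window then becomes one training example. The alternative, extending the model's positional range beyond 2048, is a separate (and heavier) change to the model itself.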

I have the same issue. I am trying to build a ChatGPT-style assistant, but Falcon-40B-instruct does not contextualize at all.

It acts as if it has no memory of the previous prompt.
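That behaviour is expected: the model itself is stateless, so any "memory" has to come from re-sending the earlier turns inside each new prompt. A minimal sketch of that idea follows; the class name, role labels, and prompt layout are my own assumptions, not a format Falcon was trained on, so you may need to match whatever template your fine-tuning data uses:

```python
class ChatHistory:
    """Minimal conversation buffer: the model has no memory between calls,
    so every request must include the previous turns in the prompt text."""

    def __init__(self, system_prompt=""):
        self.system_prompt = system_prompt
        self.turns = []  # list of (role, text) pairs

    def add(self, role, text):
        """Record a finished turn, e.g. ('User', ...) or ('Assistant', ...)."""
        self.turns.append((role, text))

    def build_prompt(self, new_user_msg):
        """Concatenate the system prompt, all prior turns, and the new
        user message into a single string to send to the model."""
        parts = [self.system_prompt] if self.system_prompt else []
        for role, text in self.turns:
            parts.append(f"{role}: {text}")
        parts.append(f"User: {new_user_msg}")
        parts.append("Assistant:")  # cue the model to answer
        return "\n".join(parts)
```

With this, a follow-up question like "What is its capital?" arrives alongside the earlier exchange, so the model can resolve "its". The trade-off is that the history itself consumes context, so long conversations eventually need truncation or summarisation of old turns.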
