How to create prompts with more than 1000 tokens?

#92
by rexer3000 - opened

I am doing a project where I need to feed BLOOM more than 1000 tokens. Is there a paid API where I can have a higher token limit?

Is the problem that BLOOM doesn't work, or do you just need to look further back in the context? Only the first problem can be solved: the maximum input length is a limitation of the model itself.

BigScience Workshop org

No, there's currently no paid API AFAIK, cc @Narsil. I think the reasoning is that it costs quite a bit of compute to host this model, and we wanted to make it accessible to everyone, which does require us to prevent people from submitting overly long sequences. Can't you launch a local instance of the model? If not, we might think about increasing the cap.

I think I saw a paid option that just makes the API faster, like switching to paid GPU hardware or something.

If you can't fit all of your data into the input, just truncate it. Here's how to do it in Python 3:

```python
# keep only the last 1000 whitespace-separated words
data = ' '.join(data.split()[-1000:])
```

Simplest way, I think.
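One caveat with the snippet above: the API limit is counted in *tokens*, not whitespace-separated words, and a single word can map to several tokens, so it's safest to leave headroom below the real limit. A minimal pure-Python sketch (the helper names and the `max_words`/`overlap` parameters are my own, not part of any API) of keeping only the tail of the input, or splitting it into overlapping chunks if you need all of it, might look like:

```python
def tail_words(text, max_words=1000):
    """Keep only the last max_words whitespace-separated words.

    Note: model limits are in tokens, not words, so pick max_words
    well below the actual token cap.
    """
    return " ".join(text.split()[-max_words:])


def chunk_words(text, max_words=1000, overlap=100):
    """Split text into overlapping chunks of at most max_words words.

    Consecutive chunks share `overlap` words so context isn't lost
    at the chunk boundaries.
    """
    words = text.split()
    if not words:
        return []
    step = max_words - overlap
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), step)]
```

You'd then send each chunk to the model separately and stitch the results back together; how well that works depends on the task.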

BigScience Workshop org

Hi @ierhon ,

If you are willing to pay for a Bloom service with specific options, please email api-inference@huggingface.co. But please bear in mind that this is a big model, so customization is probably not going to end up cheap (it depends on what you have in mind; I just want to manage expectations here).

Why did you say "@ierhon"?

BigScience Workshop org

I think Nicolas meant to tag @rexer3000 :)

At any rate, I think OP's question has been addressed; feel free to re-open otherwise!

christopher changed discussion status to closed
BigScience Workshop org

Yup, sorry, I misread the username (from the wrong line, let's say)!
