What is the max sequence length the model can compute if I use flash attention?

#20
by halfmoon039 - opened

I'm confused that for all these open-source models like Llama, Gemma, or Grok, the max_position_embeddings setting is 8192, that is, the max sequence length the model can compute is 8192. I don't think that's long enough to satisfy users' requirements. Is that the reason why ChatGPT can exceed most language models?
Besides, I've noticed that when the model computes self-attention scores, it can enable flash attention to accelerate computation, and that this method can also extend the max sequence length the model can accept. So I'm wondering: what is the max sequence length the model can compute if I use flash attention?
Thanks.

Google org

Hi @halfmoon039 , the maximum sequence length that flash attention can handle is not fixed; it varies with the specific implementation and the available hardware resources. Generally, flash attention enables models to handle significantly longer sequences than standard attention mechanisms, because it avoids materializing the full attention score matrix. However, the practical limit depends on factors such as GPU memory and the requirements of your specific use case. Note also that flash attention changes how attention is computed, not the max_position_embeddings value, which reflects the context length the model was trained with.
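
A minimal sketch (not part of the original reply) of how one might enable flash attention in 🤗 Transformers and inspect the trained context limit from the config; the model id is just an example, and it assumes `transformers` and `flash-attn` are installed on a GPU that supports bf16:

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "google/gemma-7b"  # example model id; substitute your own

# The trained context window lives in the config, not in the attention kernel.
config = AutoConfig.from_pretrained(model_id)
print("max_position_embeddings:", config.max_position_embeddings)

# Flash attention swaps in a tiled, memory-efficient kernel so longer
# sequences fit in GPU memory; it does not raise the trained position limit.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # flash attention requires fp16/bf16
    attn_implementation="flash_attention_2",
    device_map="auto",
)
```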

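As a back-of-envelope illustration (my own numbers, not from the reply above) of why GPU memory becomes the practical ceiling: with flash attention the seq_len × seq_len score matrix is never materialized, so at long sequence lengths the KV cache tends to dominate. The layer and head counts below are illustrative 7B-class values, not taken from any specific config:

```python
def kv_cache_gib(seq_len: int, n_layers: int = 28, n_kv_heads: int = 16,
                 head_dim: int = 256, bytes_per_elem: int = 2,
                 batch: int = 1) -> float:
    """Rough KV-cache size in GiB: 2 tensors (K and V) per layer."""
    return (2 * batch * n_layers * n_kv_heads * head_dim
            * seq_len * bytes_per_elem) / 2**30

for seq_len in (8_192, 32_768, 131_072):
    print(f"{seq_len:>7} tokens -> ~{kv_cache_gib(seq_len):.1f} GiB of KV cache")
```

At roughly 3.5 GiB per 8K tokens for these assumed dimensions, the cache alone can exhaust a typical GPU well before any fixed kernel limit is reached, which is why the answer is hardware-dependent rather than a single number.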