arxiv:2404.07143

Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention

Published on Apr 10 · Featured in Daily Papers on Apr 11
Authors: Tsendsuren Munkhdalai, Manaal Faruqui, Siddharth Gopal
Abstract

This work introduces an efficient method to scale Transformer-based Large Language Models (LLMs) to infinitely long inputs with bounded memory and computation. A key component in our proposed approach is a new attention technique dubbed Infini-attention. The Infini-attention incorporates a compressive memory into the vanilla attention mechanism and builds in both masked local attention and long-term linear attention mechanisms in a single Transformer block. We demonstrate the effectiveness of our approach on long-context language modeling benchmarks, 1M sequence length passkey context block retrieval and 500K length book summarization tasks with 1B and 8B LLMs. Our approach introduces minimal bounded memory parameters and enables fast streaming inference for LLMs.
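To make the mechanism concrete, here is a minimal, single-head PyTorch sketch of an Infini-attention-style layer, based only on the description above: masked local attention over the current segment, a linear-attention read-out from a bounded compressive memory, and a learned gating scalar that mixes the two streams. Class, argument, and variable names (InfiniAttentionSketch, memory, norm, beta) are illustrative assumptions rather than the authors' code, and details such as the ELU+1 feature map follow common linear-attention practice.

```python
# Minimal sketch of an Infini-attention-style layer (single head), assuming
# the simple additive memory update. Names are illustrative, not official.
import torch
import torch.nn as nn
import torch.nn.functional as F


class InfiniAttentionSketch(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.k_proj = nn.Linear(dim, dim, bias=False)
        self.v_proj = nn.Linear(dim, dim, bias=False)
        self.o_proj = nn.Linear(dim, dim, bias=False)
        # Learnable gating scalar that mixes local attention with the
        # long-term (compressive-memory) read-out.
        self.beta = nn.Parameter(torch.zeros(1))
        self.dim = dim

    def forward(self, x, memory=None, norm=None):
        # x: (batch, segment_len, dim); memory: (batch, dim, dim); norm: (batch, dim)
        b, n, d = x.shape
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)

        # 1) Masked local (causal) dot-product attention within the segment.
        scores = q @ k.transpose(-2, -1) / d ** 0.5
        causal_mask = torch.triu(torch.ones(n, n, dtype=torch.bool, device=x.device), 1)
        scores = scores.masked_fill(causal_mask, float("-inf"))
        a_local = scores.softmax(dim=-1) @ v

        # 2) Long-term read-out from the bounded compressive memory (linear attention).
        sigma_q = F.elu(q) + 1.0
        if memory is None:
            memory = torch.zeros(b, d, d, device=x.device, dtype=x.dtype)
            norm = torch.zeros(b, d, device=x.device, dtype=x.dtype)
        a_mem = (sigma_q @ memory) / (sigma_q @ norm.unsqueeze(-1) + 1e-6)

        # 3) Gate the two streams with the learned scalar, then project.
        g = torch.sigmoid(self.beta)
        out = self.o_proj(g * a_mem + (1.0 - g) * a_local)

        # 4) Update the memory with the current segment's keys/values.
        sigma_k = F.elu(k) + 1.0
        memory = memory + sigma_k.transpose(-2, -1) @ v
        norm = norm + sigma_k.sum(dim=1)
        return out, memory, norm


# Example: process a long input as a stream of fixed-size segments.
# layer = InfiniAttentionSketch(dim=64)
# mem = nrm = None
# for seg in long_input.split(128, dim=1):
#     out, mem, nrm = layer(seg, mem, nrm)
```

Streaming inference over an arbitrarily long input then amounts to calling the layer segment by segment while carrying (memory, norm) forward, so per-segment compute and memory stay bounded regardless of total context length.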

Community


Awesome work! Any chance of publishing the code too?

Excellent work! I'm curious: is the gating scalar β the only additional parameter that requires training?


I'm working on a PyTorch implementation; come and join me in the repo if you want to help:
https://github.com/jlamprou/Infini-Attention

Here's a fully working implementation repo!
https://github.com/Beomi/InfiniTransformer

(@glamprou's repo inspired me a lot! Thanks ☺️)

Llama-3 is out!
I updated my repo (https://github.com/Beomi/InfiniTransformer) to train Llama-3 with a 1M sequence length.

