{"paper_url": "https://huggingface.co/papers/2305.01625", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API: \n\n* [Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention](https://huggingface.co/papers/2404.07143) (2024)\n* [IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs](https://huggingface.co/papers/2405.02842) (2024)\n* [Linearizing Large Language Models](https://huggingface.co/papers/2405.06640) (2024)\n* [XC-Cache: Cross-Attending to Cached Context for Efficient LLM Inference](https://huggingface.co/papers/2404.15420) (2024)\n* [LLoCO: Learning Long Contexts Offline](https://huggingface.co/papers/2404.07979) (2024)\n\nPlease give a thumbs up to this comment if you found it helpful!\n\nIf you want recommendations for any paper on Hugging Face, check out [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space.\n\nYou can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}