YaRN: Efficient Context Window Extension of Large Language Models

Published on Aug 31, 2023
· Featured in Daily Papers on Sep 4, 2023


Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x fewer tokens and 2.5x fewer training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing the previous state-of-the-art in context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. We publish the checkpoints of Llama 2 7B/13B fine-tuned using YaRN with 64k and 128k context windows at
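The abstract describes extending RoPE to longer contexts. The sketch below illustrates the general idea of per-dimension frequency interpolation ("NTK-by-parts" in the paper): RoPE dimensions that rotate many times within the original context are left untouched, dimensions that rotate less than once are fully interpolated (divided by the scale factor), and a linear ramp blends the two regimes. The function name, parameter names, and default thresholds here are illustrative choices, not the paper's API; consult the paper for the exact formulation.

```python
import math

def yarn_scaled_inv_freqs(dim=128, base=10000.0, scale=16.0,
                          orig_ctx=4096, beta_fast=32.0, beta_slow=1.0):
    """Sketch of YaRN-style per-dimension RoPE frequency interpolation.

    Illustrative only: names and default thresholds are assumptions,
    not taken from the paper's code.
    """
    inv_freqs = []
    for i in range(0, dim, 2):
        freq = base ** (-i / dim)            # standard RoPE inverse frequency
        wavelength = 2 * math.pi / freq
        # r = number of full rotations this dimension makes over the
        # original (pre-extension) context window
        r = orig_ctx / wavelength
        # gamma = 1: many rotations -> keep original frequency (no change)
        # gamma = 0: sub-one rotation -> fully interpolate (divide by scale)
        gamma = min(1.0, max(0.0, (r - beta_slow) / (beta_fast - beta_slow)))
        inv_freqs.append((1.0 - gamma) * freq / scale + gamma * freq)
    return inv_freqs

# Example: extend a 4k-context model by 16x
freqs = yarn_scaled_inv_freqs(dim=128, scale=16.0, orig_ctx=4096)
```

The highest-frequency dimension comes out unchanged, while the lowest-frequency dimension is divided by the full scale factor, so short-range positional resolution is preserved while long-range positions are compressed into the trained range.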



Are there any further models that YaRN was used on?
The last update was only 2 months ago, but with methods changing fast, maybe YaRN has been overtaken?


Models citing this paper 58


Datasets citing this paper 0


Spaces citing this paper 36

Collections including this paper 22