---
datasets:
- emozilla/yarn-train-tokenized-16k-mistral
metrics:
- perplexity
library_name: transformers
---

# Model Card: Nous-Yarn-Mistral-7b-64k

[Preprint (arXiv)](https://arxiv.org/abs/2309.00071)
[GitHub](https://github.com/jquesnelle/yarn)
![yarn](https://raw.githubusercontent.com/jquesnelle/yarn/mistral/data/proofpile-long-small-mistral.csv.png)

## Model Description

Nous-Yarn-Mistral-7b-64k is a state-of-the-art language model for long context, further pretrained on long-context data for 1000 steps using the YaRN extension method. It is an extension of [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) and supports a 64k token context window.

To use, pass `trust_remote_code=True` when loading the model, for example:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Yarn-Mistral-7b-64k",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
```

## Collaborators

- [bloc97](https://github.com/bloc97): Methods, paper, and evals
- [@theemozilla](https://twitter.com/theemozilla): Methods, paper, model training, and evals
- [@EnricoShippole](https://twitter.com/EnricoShippole): Model training
- [honglu2875](https://github.com/honglu2875): Paper and evals

The authors would like to thank LAION AI for their support of compute for this model. It was trained on the [JUWELS](https://www.fz-juelich.de/en/ias/jsc/systems/supercomputers/juwels) supercomputer.
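
## Usage Example

A minimal inference sketch following the loading snippet above, assuming the tokenizer is bundled in the same repository; the prompt and generation settings are illustrative, not prescribed by this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the tokenizer ships alongside the model weights in the same repo.
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Yarn-Mistral-7b-64k")
model = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Yarn-Mistral-7b-64k",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Short illustrative prompt; in practice the 64k window is intended for
# much longer inputs, such as entire documents.
prompt = "YaRN extends a pretrained model's context window by"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64, do_sample=False)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```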