---
datasets:
  - emozilla/yarn-train-tokenized-16k-mistral
metrics:
  - perplexity
library_name: transformers
---

# Model Card: Nous-Yarn-Mistral-7b-64k

Preprint (arXiv)
GitHub

## Model Description

Nous-Yarn-Mistral-7b-64k is a state-of-the-art language model for long context, further pretrained on long context data for 1000 steps using the YaRN extension method. It is an extension of Mistral-7B-v0.1 and supports a 64k token context window.

To use, pass `trust_remote_code=True` when loading the model, for example:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Yarn-Mistral-7b-64k",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
```
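As a minimal usage sketch (not part of the original card), the model loaded above can be paired with the tokenizer from the same repository for generation; the prompt text and generation settings below are illustrative assumptions, not recommended defaults:

```python
import torch
from transformers import AutoTokenizer

# Illustrative sketch: generate with the `model` loaded in the snippet above.
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Yarn-Mistral-7b-64k")

# Example prompt; in practice this model targets much longer inputs (up to 64k tokens).
prompt = "Summarize the following document:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```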

## Collaborators

The authors would like to thank LAION AI for their support of compute for this model. It was trained on the JUWELS supercomputer.