---
datasets:
- emozilla/yarn-train-tokenized-16k-mistral
metrics:
- perplexity
library_name: transformers
---

# Model Card: Nous-Yarn-Mistral-7b-64k

[Preprint (arXiv)](https://arxiv.org/abs/2309.00071)
[GitHub](https://github.com/jquesnelle/yarn)

## Model Description

Nous-Yarn-Mistral-7b-64k is a state-of-the-art language model for long context, further pretrained on long-context data for 1000 steps using the YaRN extension method.
It is an extension of [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) and supports a 64k token context window.

To use, pass `trust_remote_code=True` when loading the model, for example:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Yarn-Mistral-7b-64k",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
```
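For instance, here is a minimal usage sketch that pairs the loaded model with its tokenizer for greedy generation; the prompt and generation settings below are illustrative and not part of the model card.

```python
from transformers import AutoTokenizer

# Assumes `model` was loaded as shown above.
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Yarn-Mistral-7b-64k")

# Illustrative prompt; any input up to the 64k-token context window works.
prompt = "The YaRN method extends a model's context window by"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```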

## Collaborators

- [bloc97](https://github.com/bloc97): Methods, paper, and evals
- [@theemozilla](https://twitter.com/theemozilla): Methods, paper, model training, and evals
- [@EnricoShippole](https://twitter.com/EnricoShippole): Model training
- [honglu2875](https://github.com/honglu2875): Paper and evals

The authors would like to thank LAION AI for their support of compute for this model.
It was trained on the [JUWELS](https://www.fz-juelich.de/en/ias/jsc/systems/supercomputers/juwels) supercomputer.