andreaskoepf committed
Commit 5a7e696
Parent: 77def0d

Update README.md

Files changed (1): README.md (+6 −2)
README.md CHANGED

@@ -2,12 +2,16 @@
 license: other
 ---
 
+- **At least Huggingface Transformers [4.31.0](https://pypi.org/project/transformers/4.31.0/) is required to load this model!**
+- datatype: fp16
 - [wand](https://wandb.ai/open-assistant/supervised-finetuning/runs/2jfazjt9) (still internal, needs to be moved to public-sft)
 - checkpoint: 3319 steps
 
-## Note
 
-In order to load this model you need to install a pre-release version of the Huggingface transformers library.
+## Long context via RoPE scaling
+
+This model was fine-tuned with a context size of 8192 tokens using linear scaling of RoPE embeddings. In order to load and use this model
+a version >=4.31.0 of HF Transformers needs to be installed.
 
 
 ## Model Configuration
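For reference, the linear RoPE scaling mentioned in the added README section can be sketched as follows. This is a minimal illustration with assumed function names, not the model's actual implementation; in Transformers >= 4.31.0 the same idea is enabled through the model config's `rope_scaling` option with `{"type": "linear", "factor": ...}`.

```python
# Minimal sketch of linear RoPE scaling (assumed names, illustration only).
# Linear scaling divides the position index by a factor before computing the
# rotary-embedding angles, so an 8192-token context maps onto the angle range
# the base model saw during its original 4096-token pretraining.


def rope_angles(position, dim=8, base=10000.0, scaling_factor=1.0):
    """Rotary angles for one position; scaling_factor > 1 stretches context."""
    pos = position / scaling_factor
    return [pos * base ** (-2.0 * i / dim) for i in range(dim // 2)]


# With factor 2.0, position 8190 yields the same angles that position 4095
# produced in the unscaled model, keeping long inputs inside the trained range.
assert rope_angles(8190, scaling_factor=2.0) == rope_angles(4095)
```

Because only the position indices are rescaled, the model's weights are unchanged; fine-tuning at the longer context (as done here) lets the model adapt to the compressed angle resolution.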