sade-adrien committed
Commit b40233b
1 Parent(s): 911e1b1

Update README.md

Files changed (1): README.md (+1 -1)
@@ -21,7 +21,7 @@ It achieves the following results on the evaluation set:
 
 This model is a fine-tuning of Mistral-7B-Instruct-v0.1.
 This FT was using a Position Interpolation factor of 0.5 (Linear RoPE scaling).
-Please note that the RoPE scaling factor should be determined by L'/L where L is the pre-training max context length and L' is the new max context length. In our case, we are just making experiments (and for us we would have had L'/L = 7200/8096 > 1 which did not require any PI scaling).
+Please note that the RoPE scaling factor should be determined by L/L' where L is the pre-training max context length and L' is the new max context length. In our case, we are just making experiments (and for us we would have had L/L' = 8096/7200 > 1 which did not require any PI scaling).
 
 ## Intended uses & limitations
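To illustrate the convention the corrected line uses, here is a minimal sketch of Linear RoPE scaling (Position Interpolation). The function `rope_angles` is a hypothetical helper, not part of the model's code; it only shows how multiplying the position index by a factor L/L' < 1 squeezes new, longer positions back into the pre-training position range.

```python
def rope_angles(position, dim, base=10000.0, scaling_factor=1.0):
    """Rotary embedding angles for a single position.

    Linear RoPE scaling (Position Interpolation) multiplies the
    position index by a factor before computing the angles. With
    factor = L / L' (L = pre-training max context length, L' = new,
    longer max context length), a position beyond L is interpolated
    to fall inside the range seen during pre-training.
    """
    scaled_pos = position * scaling_factor
    return [scaled_pos / (base ** (2 * i / dim)) for i in range(dim // 2)]

# With a factor of 0.5 (as used in this fine-tune), position 8000
# produces the same angles that position 4000 did at pre-training.
assert rope_angles(8000, 64, scaling_factor=0.5) == rope_angles(4000, 64)
```

Note that this factor of L/L' ≤ 1 applied to positions is the reciprocal of the context-expansion factor L'/L ≥ 1; when L/L' > 1, as in the experiment described above, no interpolation is needed.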