sade-adrien committed
Commit 9db76cc
1 Parent(s): c356e57

Update README.md

README.md CHANGED
@@ -19,7 +19,10 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-More information needed
+This model is a fine-tune of Mistral-7B-Instruct-v0.1.
+This fine-tune was done with full attention (removing the 4k sliding-window attention).
+This fine-tune used a Position Interpolation factor of 0.5 (linear RoPE scaling).
+Please note that the RoPE scaling factor should be determined by L/L', where L is the pre-training max context length and L' is the new max context length. In our case we are just experimenting: we would have had L/L' = 8096/7200 > 1, so no PI scaling was actually required.
 
 ## Intended uses & limitations
 
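The "full attention" change above is only described, not shipped in this commit. As a hedged sketch, disabling Mistral's 4k sliding-window attention in Hugging Face transformers amounts to clearing the `sliding_window` field of the model config (assuming the standard `MistralConfig` field; this is not the author's actual training code):

```python
# Minimal sketch (not the author's training code): disable Mistral's 4k
# sliding-window attention so every token attends over the full context.
# Assumes the standard Hugging Face `MistralConfig.sliding_window` field.
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
config.sliding_window = None  # None => full (non-windowed) causal attention

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.1",
    config=config,
)
```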
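Similarly, a minimal illustrative sketch of what a Position Interpolation factor of 0.5 (linear RoPE scaling) does: position indices are multiplied by the factor before the rotary cos/sin tables are built, so twice the pre-training context fits into the original position range. The function below is a plain-PyTorch illustration, not the model's actual implementation:

```python
# Illustrative sketch of linear RoPE scaling (Position Interpolation):
# positions are multiplied by pi_factor before building the cos/sin tables,
# so pi_factor = 0.5 maps a sequence of length 2L into the original range L.
import torch

def rope_cos_sin(seq_len: int, head_dim: int,
                 base: float = 10000.0, pi_factor: float = 0.5):
    # Standard RoPE inverse frequencies, one per pair of dimensions.
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    # Position Interpolation: compress the position ids by pi_factor.
    positions = torch.arange(seq_len).float() * pi_factor
    freqs = torch.outer(positions, inv_freq)   # (seq_len, head_dim // 2)
    emb = torch.cat((freqs, freqs), dim=-1)    # (seq_len, head_dim)
    return emb.cos(), emb.sin()

# With pi_factor = 0.5, length 8192 is embedded as if it were length 4096.
cos, sin = rope_cos_sin(seq_len=8192, head_dim=128)
```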
@@ -27,7 +30,7 @@ More information needed
 
 ## Training and evaluation data
 
-More information needed
+The data is a 9k sample from the RedPajama dataset. Context lengths are at most 7200, following a decreasing exponential distribution with scale 1500.
 
 ## Training procedure
 
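The length distribution above is described but not shipped; a hypothetical sketch of drawing context lengths from a decreasing exponential distribution with scale 1500, capped at 7200 (the rejection loop and the `sample_context_length` helper are illustrative assumptions, not the author's pipeline):

```python
# Hypothetical sketch of the described length distribution: exponential
# with scale 1500, rejected so that sampled lengths stay <= 7200.
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_context_length(scale: float = 1500.0, max_len: int = 7200) -> int:
    # Rejection sampling keeps the decreasing-exponential shape below the cap.
    while True:
        length = rng.exponential(scale)
        if 1 <= length <= max_len:
            return int(length)

# Example: draw per-example context lengths for a 9k-sample dataset.
lengths = [sample_context_length() for _ in range(9_000)]
```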