Update README.md
README.md CHANGED
@@ -23,7 +23,8 @@ We use [OpenChat](https://huggingface.co/openchat) packing, trained with [Axolot
 This release is trained on a curated filtered subset of most of our GPT-4 augmented data.
 It is the same subset of our data as was used in our [OpenOrcaxOpenChat-Preview2-13B model](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B).
 
-HF Leaderboard evals place this model
+HF Leaderboard evals place this model as #1 for all 13B long context models at release time.
+We achieve >112% of the performance of the base LLongMA2-13b-16k model we tuned on top of.
 As well, we preserve >98% of the performance of the OpenOrcaxOpenChat-Preview2-13B model we share datasets with, while extending the context to 16k.
 
 We did this training as part of testing the setup of our H100 cluster.
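For reference, a minimal sketch of loading the resulting 16k-context model with Hugging Face `transformers`. The repo id below is an assumption for illustration; the diff itself does not name the release, so substitute the actual model card path.

```python
# Minimal sketch, assuming the release lives at Open-Orca/LlongOrca-13B-16k
# (hypothetical repo id; the diff does not state it).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Open-Orca/LlongOrca-13B-16k"  # assumed, replace with the real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package; it spreads weights
# across available GPUs, which a 13B model at 16k context generally needs.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize the following document:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```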