Update README.md #1
by Dhia-GB · opened

README.md CHANGED
```diff
@@ -87,7 +87,10 @@ print(response)
 <br>

 ## Benchmarks
-We report in the following table our internal pipeline benchmarks
+We report our internal pipeline benchmarks in the following table.
+- We use [lm-evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness).
+- We report **raw scores** obtained by applying the chat template **without fewshot_as_multiturn** (unlike Llama3.1).
+- We use the same batch size across all models.



```
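For readers who want to reproduce this kind of run, below is a minimal sketch using lm-evaluation-harness's Python API under the settings the PR describes: chat template applied, `fewshot_as_multiturn` left off, and a fixed batch size. The model id, task list, and batch size are placeholders rather than values from the PR, and the `apply_chat_template`/`fewshot_as_multiturn` arguments assume a recent (0.4.x) release of the harness.

```python
# Sketch of an evaluation run matching the settings described above.
# Assumes lm-evaluation-harness >= 0.4.x; model id, tasks, and batch
# size below are illustrative placeholders, not taken from the PR.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=org/model-name",  # placeholder model id
    tasks=["gsm8k"],                         # placeholder task list
    batch_size=16,                # same batch size across all models
    apply_chat_template=True,     # raw scores from chat-templated prompts
    fewshot_as_multiturn=False,   # few-shot examples not sent as multi-turn
)

# Per-task aggregate metrics are returned under the "results" key.
print(results["results"])
```

The same settings map to the CLI flags `--apply_chat_template` and `--fewshot_as_multiturn` if you prefer running the harness from the shell.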