Adding Evaluation Results #3
by leaderboard-pr-bot - opened

README.md CHANGED
@@ -326,3 +326,17 @@ s still a matter of speculation and debate, ongoing research and exploration may
 plications.
 
 ```
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_migtissera__Synthia-13B-v1.2)
+
+| Metric                | Value |
+|-----------------------|-------|
+| Avg.                  | 51.56 |
+| ARC (25-shot)         | 61.26 |
+| HellaSwag (10-shot)   | 82.93 |
+| MMLU (5-shot)         | 56.47 |
+| TruthfulQA (0-shot)   | 47.27 |
+| Winogrande (5-shot)   | 76.48 |
+| GSM8K (5-shot)        | 10.99 |
+| DROP (3-shot)         | 25.48 |
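The Avg. row appears to be the unweighted mean of the seven per-task scores. A minimal sketch to check that, assuming simple arithmetic averaging (the leaderboard rounds each task score before publishing, so the recomputed mean can drift from the reported 51.56 by a hundredth or two):

```python
# Per-task scores as reported in the table above.
scores = {
    "ARC (25-shot)": 61.26,
    "HellaSwag (10-shot)": 82.93,
    "MMLU (5-shot)": 56.47,
    "TruthfulQA (0-shot)": 47.27,
    "Winogrande (5-shot)": 76.48,
    "GSM8K (5-shot)": 10.99,
    "DROP (3-shot)": 25.48,
}

# Unweighted mean over the seven tasks; compare against the reported Avg.
avg = sum(scores.values()) / len(scores)
print(f"recomputed average: {avg:.2f} (reported: 51.56)")
```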