Adding Evaluation Results #5
by leaderboard-pr-bot - opened

README.md CHANGED
@@ -361,3 +361,17 @@ If you found wizardlm_alpaca_dolly_orca_open_llama_13b useful in your research o
 howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
 }
 ```
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__orca_mini_13B-GPTQ)
+
+| Metric                | Value                     |
+|-----------------------|---------------------------|
+| Avg.                  | 28.88                     |
+| ARC (25-shot)         | 27.3                      |
+| HellaSwag (10-shot)   | 25.85                     |
+| MMLU (5-shot)         | 25.31                     |
+| TruthfulQA (0-shot)   | 48.06                     |
+| Winogrande (5-shot)   | 63.77                     |
+| GSM8K (5-shot)        | 0.08                      |
+| DROP (3-shot)         | 11.77                     |
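The "Avg." row matches the unweighted mean of the seven benchmark scores. A minimal Python sketch verifying that arithmetic, with the values copied from the table above (standard-library Python only):

```python
# Benchmark scores copied from the results table above.
scores = {
    "ARC (25-shot)": 27.3,
    "HellaSwag (10-shot)": 25.85,
    "MMLU (5-shot)": 25.31,
    "TruthfulQA (0-shot)": 48.06,
    "Winogrande (5-shot)": 63.77,
    "GSM8K (5-shot)": 0.08,
    "DROP (3-shot)": 11.77,
}

# Unweighted mean over the seven benchmarks, rounded to two decimals.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 28.88, matching the "Avg." row
```

The per-sample predictions behind these numbers live in the linked details repo, which can be read with the Hugging Face `datasets` library. This is a hedged sketch: the per-benchmark config names and the `"latest"` split name are assumptions about how leaderboard details repos are commonly laid out, so the configs are listed first rather than hard-coded:

```python
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_TheBloke__orca_mini_13B-GPTQ"

# Discover the per-benchmark configs before loading anything.
configs = get_dataset_config_names(repo)
print(configs)

# Load one config; "latest" as the split name is an assumption.
details = load_dataset(repo, configs[0], split="latest")
print(details[0])
```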