Adding Evaluation Results #10
by leaderboard-pr-bot - opened

README.md CHANGED
@@ -221,4 +221,17 @@ Please see the Responsible Use Guide available at https://ai.meta.com/llama/resp
 journal={CoRR},
 year={2021}
 }
-```
+```
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Open-Orca__OpenOrca-Platypus2-13B)
+
+| Metric | Value |
+|-----------------------|---------------------------|
+| Avg. | 50.47 |
+| ARC (25-shot) | 62.8 |
+| HellaSwag (10-shot) | 83.15 |
+| MMLU (5-shot) | 59.39 |
+| TruthfulQA (0-shot) | 53.08 |
+| Winogrande (5-shot) | 76.24 |
+| GSM8K (5-shot) | 9.02 |
+| DROP (3-shot) | 9.63 |
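For readers checking the numbers: the Avg. row is consistent with a plain unweighted mean of the seven benchmark scores. A minimal sketch of that check, assuming the unweighted-mean rule (the averaging rule is an assumption; the metric names and values are copied from the table above):

```python
# Sanity check: is "Avg." the unweighted mean of the seven benchmarks?
# Scores copied from the PR's results table; the averaging rule itself
# is an assumption, not something stated in this PR.
scores = {
    "ARC (25-shot)": 62.8,
    "HellaSwag (10-shot)": 83.15,
    "MMLU (5-shot)": 59.39,
    "TruthfulQA (0-shot)": 53.08,
    "Winogrande (5-shot)": 76.24,
    "GSM8K (5-shot)": 9.02,
    "DROP (3-shot)": 9.63,
}

avg = sum(scores.values()) / len(scores)
print(f"{avg:.2f}")  # prints 50.47, matching the "Avg." row
```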