Adding Evaluation Results #2
by leaderboard-pr-bot - opened

README.md CHANGED
@@ -142,4 +142,17 @@ Commodity cost was ~$300.
       eprint={2307.09288},
       archivePrefix={arXiv},
 }
-```
+```
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Open-Orca__LlongOrca-13B-16k)
+
+| Metric              | Value |
+|---------------------|-------|
+| Avg.                | 53.02 |
+| ARC (25-shot)       | 62.46 |
+| HellaSwag (10-shot) | 82.75 |
+| MMLU (5-shot)       | 55.54 |
+| TruthfulQA (0-shot) | 50.11 |
+| Winogrande (5-shot) | 76.4  |
+| GSM8K (5-shot)      | 12.28 |
+| DROP (3-shot)       | 31.59 |
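For reference, the "Avg." row is the arithmetic mean of the seven benchmark scores: (62.46 + 82.75 + 55.54 + 50.11 + 76.4 + 12.28 + 31.59) / 7 ≈ 53.02.

To drill into the per-sample outputs behind these numbers, the linked details dataset can be loaded with the `datasets` library. A minimal sketch, assuming the usual open-llm-leaderboard details layout (one config per benchmark run and a "latest" split; these names are conventions of those repos, not confirmed by this PR, so list them rather than hard-coding):

```python
# Minimal sketch: browse the linked details dataset with the `datasets` library.
# Assumptions (not confirmed by this PR): each benchmark run lives under its own
# config name, and a "latest" split points at the most recent evaluation.
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_Open-Orca__LlongOrca-13B-16k"

configs = get_dataset_config_names(repo)  # one config per benchmark run
print(configs)

# Load the first run and inspect a single example (prompt, model output, score).
details = load_dataset(repo, configs[0], split="latest")
print(details[0])
```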